CN111144310A - Face detection method and system based on multi-layer information fusion - Google Patents
- Publication number
- CN111144310A CN111144310A CN201911373204.9A CN201911373204A CN111144310A CN 111144310 A CN111144310 A CN 111144310A CN 201911373204 A CN201911373204 A CN 201911373204A CN 111144310 A CN111144310 A CN 111144310A
- Authority
- CN
- China
- Prior art keywords
- feature map
- convolution
- convolution operation
- feature
- face detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face detection method based on multi-layer information fusion, relating to the technical field of face detection, which comprises the following steps: extracting features of a face image to be detected with a pre-generated neural network model to obtain an initial feature map; performing a first convolution operation on the initial feature map to obtain a first feature map; performing a second convolution operation on the initial feature map to obtain a second feature map; performing a third convolution operation and a fourth convolution operation on the second feature map, respectively, to obtain a corresponding third feature map and fourth feature map; performing a fifth convolution operation on the third feature map to obtain a fifth feature map; performing a first upsampling on the fifth feature map, and fusing the first upsampling result with the fourth feature map to obtain a sixth feature map; and performing a second upsampling on the sixth feature map, fusing the second upsampling result with the first feature map to obtain a seventh feature map, and performing face detection according to the seventh feature map. The invention effectively improves the accuracy of face detection.
Description
Technical Field
The invention relates to the technical field of face detection, and in particular to a face detection method and system based on multi-layer information fusion.
Background
Face detection is a fundamental research direction in the field of computer vision and is widely applied in daily life. Most current mainstream face detection methods are based on deep learning: for an input image, a deep learning network extracts image features, and whether a face exists and where it is located are determined from those features.
With the development of convolutional neural networks (CNNs), most face detection is now based on convolutional neural networks and achieves quite good results. In face detection, a convolutional neural network has the characteristic that its high-level convolutional layers carry rich semantic information while its low-level convolutional layers carry rich detail information.
Disclosure of Invention
The invention aims to provide a face detection method and system based on multi-layer information fusion.
To achieve this purpose, the invention adopts the following technical scheme:
a face detection method based on multi-layer information fusion, which specifically comprises the following steps:
step S1, extracting the features of the face image to be detected according to a pre-generated neural network model to obtain an initial feature map;
step S2, performing a first convolution operation on the initial feature map to obtain a first feature map;
step S3, performing a second convolution operation on the initial feature map to obtain a second feature map;
step S4, performing a third convolution operation and a fourth convolution operation on the second feature map respectively to obtain a corresponding third feature map and a corresponding fourth feature map;
step S5, performing a fifth convolution operation on the third feature map to obtain a fifth feature map;
step S6, performing first upsampling on the fifth feature map, and performing feature fusion on the first upsampling result and the fourth feature map to obtain a sixth feature map;
and step S7, performing second upsampling on the sixth feature map, performing feature fusion on a second upsampling result and the first feature map to obtain a seventh feature map, and performing face detection according to the seventh feature map.
As a preferred aspect of the present invention, the convolution kernel size of the first convolution operation is 3 × 3.
As a preferred aspect of the present invention, the convolution kernel size of the second convolution operation is 3 × 3, and the step size is 2.
As a preferred aspect of the present invention, the convolution kernel size of the third convolution operation is 3 × 3, and the step size is 2.
As a preferred aspect of the present invention, the convolution kernel size of the fourth convolution operation is 3 × 3.
As a preferred aspect of the present invention, the convolution kernel size of the fifth convolution operation is 1 × 1.
As a preferred aspect of the present invention, the first upsampling is 2-times upsampling.
As a preferred aspect of the present invention, the second upsampling is 2-times upsampling.
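The shape bookkeeping behind steps S1 to S7 can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes "same"-style padding for the 3×3 convolutions (the patent does not specify padding), so that stride-1 convolutions preserve the spatial size, stride-2 convolutions halve it, and the 2-times-upsampled maps line up with their fusion partners. The helper names `conv_out` and `pipeline_shapes` are hypothetical.

```python
def conv_out(size, kernel, stride=1, padding=None):
    """Output size of a convolution; 'same'-style padding assumed by default."""
    if padding is None:
        padding = kernel // 2
    return (size + 2 * padding - kernel) // stride + 1

def pipeline_shapes(h):
    """Trace one spatial dimension through steps S2-S7."""
    first  = conv_out(h, 3)                 # S2: 3x3 conv, stride 1 -> same size
    second = conv_out(h, 3, stride=2)       # S3: 3x3 conv, stride 2 -> half size
    third  = conv_out(second, 3, stride=2)  # S4: 3x3 conv, stride 2 -> quarter size
    fourth = conv_out(second, 3)            # S4: 3x3 conv, stride 1 -> half size
    fifth  = conv_out(third, 1)             # S5: 1x1 conv -> size unchanged
    sixth  = fifth * 2                      # S6: 2x upsampling, fused with fourth
    assert sixth == fourth                  # fusion requires matching sizes
    seventh = sixth * 2                     # S7: 2x upsampling, fused with first
    assert seventh == first
    return {"first": first, "fourth": fourth, "seventh": seventh}

print(pipeline_shapes(64))  # {'first': 64, 'fourth': 32, 'seventh': 64}
```

Under these assumptions the seventh feature map recovers the full spatial resolution of the initial feature map, which is what makes the final fusion with the first feature map possible.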
A face detection system based on multi-layer information fusion, applying any one of the above face detection methods based on multi-layer information fusion, which specifically comprises:
the feature extraction module, used for extracting features of the face image to be detected according to a pre-generated neural network model to obtain an initial feature map;
the first convolution module is connected with the feature extraction module and used for performing first convolution operation on the initial feature map to obtain a first feature map;
the second convolution module is connected with the feature extraction module and used for performing second convolution operation on the initial feature map to obtain a second feature map;
the third convolution module is connected with the second convolution module and is used for respectively performing third convolution operation and fourth convolution operation on the second feature map to obtain a corresponding third feature map and a corresponding fourth feature map;
the fourth convolution module is connected with the third convolution module and used for performing fifth convolution operation on the third feature map to obtain a fifth feature map;
the fifth convolution module is connected with the fourth convolution module and used for performing first up-sampling on the fifth feature map and performing feature fusion on the first up-sampling result and the fourth feature map to obtain a sixth feature map;
and the sixth convolution module is connected with the fifth convolution module and used for performing second up-sampling on the sixth feature map, performing feature fusion on a second up-sampling result and the first feature map to obtain a seventh feature map, and performing face detection according to the seventh feature map.
The invention has the following beneficial effects: the relations among the features of different convolutional layers are taken into account, the features of different convolutional layers are coupled together to generate new, improved features, and these improved features are used for face detection, so that the accuracy of face detection is effectively improved.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the invention; a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a face detection method based on multi-layer information fusion according to an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating a principle of a face detection method based on multi-layer information fusion according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a face detection system based on multi-layer information fusion according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further explained below through specific embodiments in combination with the accompanying drawings.
The drawings are for illustration only, show schematic rather than actual forms, and are not to be construed as limiting this patent; to better illustrate the embodiments, some parts of the drawings may be omitted, enlarged or reduced, and they do not represent the size of an actual product; certain well-known structures and their descriptions may also be omitted, as will be understood by those skilled in the art.
The same or similar reference numerals in the drawings of the embodiments correspond to the same or similar components. In the description of the invention, orientation terms such as "upper", "lower", "left", "right", "inner" and "outer" indicate orientations or positional relationships based on the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore illustrative only, are not to be construed as limiting this patent, and their specific meanings can be understood by those skilled in the art according to the specific situation.
In the description of the present invention, unless otherwise explicitly specified or limited, terms such as "connected", where they indicate a connection relationship between components, are to be understood broadly: the connection may be fixed, detachable or integral; mechanical or electrical; direct or indirect through an intervening medium; or an interaction between two components. The specific meanings of these terms can be understood by those skilled in the art according to the specific situation.
Based on the technical problems in the prior art, the invention provides a face detection method based on multi-layer information fusion, as shown in fig. 1, which specifically comprises the following steps:
step S1, extracting the features of the face image to be detected according to a pre-generated neural network model to obtain an initial feature map;
step S2, performing a first convolution operation on the initial feature map to obtain a first feature map;
step S3, performing a second convolution operation on the initial feature map to obtain a second feature map;
step S4, performing a third convolution operation and a fourth convolution operation on the second feature map respectively to obtain a corresponding third feature map and a corresponding fourth feature map;
step S5, performing a fifth convolution operation on the third feature map to obtain a fifth feature map;
step S6, performing first upsampling on the fifth feature map, and performing feature fusion on the first upsampling result and the fourth feature map to obtain a sixth feature map;
and step S7, performing second upsampling on the sixth feature map, performing feature fusion on the second upsampling result and the first feature map to obtain a seventh feature map, and performing face detection according to the seventh feature map.
Specifically, in this embodiment, the face detection method based on multi-layer information fusion improves the features used for face detection by associating high-level features with low-level features, which helps to improve the performance of current face detectors.
Further, as shown in fig. 2, face detection accuracy is effectively improved through further feature optimization of the initial feature map. Specifically, two higher layers of information are obtained by two convolutions (Conv) with a kernel size of 3×3 and a stride of 2; the three different layers of information are brought into the same value domain by a 3×3 convolution (3x3 Conv) and a 1×1 convolution (1x1 Conv); and finally the different layers of information are fused together by 2-times upsampling (2x up) and element-wise summation (sum).
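The "2x up" plus "sum" fusion from fig. 2 can be illustrated with a few lines of NumPy. This is a hedged sketch rather than the inventors' code: the patent specifies only the upsampling factor, so nearest-neighbour interpolation is assumed, and the names `upsample2x` and `fuse` are illustrative.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling ("2x up" in fig. 2); the interpolation
    # method is an assumption, since the patent only specifies the factor.
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

def fuse(coarse, fine):
    # "sum" fusion: upsample the coarser (semantically richer) map and add
    # the finer (detail-richer) map element-wise.
    up = upsample2x(coarse)
    assert up.shape == fine.shape, "maps must match after upsampling"
    return up + fine

coarse = np.ones((1, 4, 4))        # e.g. the fifth feature map
fine = np.full((1, 8, 8), 2.0)     # e.g. the fourth feature map
fused = fuse(coarse, fine)         # the sixth feature map
print(fused.shape)     # (1, 8, 8)
print(fused[0, 0, 0])  # 3.0
```

The same operation applied to the sixth and first feature maps yields the seventh feature map used for detection.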
As a preferred aspect of the present invention, the convolution kernel size of the first convolution operation is 3 × 3.
As a preferred aspect of the present invention, the convolution kernel size of the second convolution operation is 3 × 3 and the step size is 2.
As a preferred aspect of the present invention, the convolution kernel size of the third convolution operation is 3 × 3 and the step size is 2.
As a preferred aspect of the present invention, the convolution kernel size of the fourth convolution operation is 3 × 3.
As a preferred aspect of the present invention, the convolution kernel size of the fifth convolution operation is 1 × 1.
As a preferred aspect of the present invention, the first upsampling is 2-times upsampling.
As a preferred aspect of the present invention, the second upsampling is 2 times upsampling.
A face detection system based on multi-layer information fusion, which applies any one of the above face detection methods based on multi-layer information fusion, as shown in fig. 3, specifically includes:
the feature extraction module 1 is used for extracting features of a face image to be detected according to a pre-generated neural network model to obtain an initial feature map;
the first convolution module 2 is connected with the feature extraction module 1 and is used for performing first convolution operation on the initial feature map to obtain a first feature map;
the second convolution module 3 is connected with the feature extraction module 1 and is used for performing second convolution operation on the initial feature map to obtain a second feature map;
the third convolution module 4 is connected with the second convolution module 3 and is used for respectively performing third convolution operation and fourth convolution operation on the second feature map to obtain a corresponding third feature map and a corresponding fourth feature map;
the fourth convolution module 5 is connected with the third convolution module 4 and is used for performing fifth convolution operation on the third feature map to obtain a fifth feature map;
the fifth convolution module 6 is connected with the fourth convolution module 5 and is used for performing first up-sampling on the fifth feature map and performing feature fusion on the first up-sampling result and the fourth feature map to obtain a sixth feature map;
and the sixth convolution module 7 is connected with the fifth convolution module 6 and is used for performing second upsampling on the sixth feature map, performing feature fusion on the second upsampling result and the first feature map to obtain a seventh feature map, and performing face detection according to the seventh feature map.
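The module wiring of fig. 3 can be sketched end to end in a few dozen lines. This is an illustration only, not the patented implementation: the class and function names are hypothetical, the learned convolutions are replaced by simple averaging kernels, "same" zero padding is assumed (the patent leaves it unspecified), and nearest-neighbour interpolation is assumed for the 2-times upsampling.

```python
import numpy as np

def conv2d(x, k=3, stride=1):
    # Stand-in for a learned convolution: a k x k averaging kernel with
    # 'same' zero padding (a simplifying assumption, not the patent's spec).
    p = k // 2
    xp = np.pad(x, p)
    h = (xp.shape[0] - k) // stride + 1
    w = (xp.shape[1] - k) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i*stride:i*stride+k, j*stride:j*stride+k].mean()
    return out

def up2x(x):
    # Nearest-neighbour 2x upsampling (interpolation method assumed).
    return x.repeat(2, axis=0).repeat(2, axis=1)

class FusionDetectorSketch:
    """Hypothetical wiring of convolution modules 2-7 from fig. 3."""
    def forward(self, initial):
        first = conv2d(initial, 3)             # first convolution module
        second = conv2d(initial, 3, stride=2)  # second convolution module
        third = conv2d(second, 3, stride=2)    # third convolution module (3x3, stride 2)
        fourth = conv2d(second, 3)             # third convolution module (3x3)
        fifth = conv2d(third, 1)               # fourth convolution module (1x1)
        sixth = up2x(fifth) + fourth           # fifth convolution module
        seventh = up2x(sixth) + first          # sixth convolution module
        return seventh

out = FusionDetectorSketch().forward(np.ones((16, 16)))
print(out.shape)  # (16, 16)
```

Note that the seventh feature map comes back at the resolution of the initial feature map, so a detection head operating on it sees both the fused high-level semantics and the low-level detail.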
It should be understood that the above-described embodiments are merely preferred embodiments of the invention together with the technical principles applied. Those skilled in the art will understand that various modifications, equivalents and changes can be made to the invention; such variations remain within the scope of the invention as long as they do not depart from its spirit. In addition, certain terms used in the specification and claims of the present application are not limiting but are used merely for convenience of description.
Claims (9)
1. A face detection method based on multi-layer information fusion is characterized by comprising the following steps:
step S1, extracting the features of the face image to be detected according to a pre-generated neural network model to obtain an initial feature map;
step S2, performing a first convolution operation on the initial feature map to obtain a first feature map;
step S3, performing a second convolution operation on the initial feature map to obtain a second feature map;
step S4, performing a third convolution operation and a fourth convolution operation on the second feature map respectively to obtain a corresponding third feature map and a corresponding fourth feature map;
step S5, performing a fifth convolution operation on the third feature map to obtain a fifth feature map;
step S6, performing first upsampling on the fifth feature map, and performing feature fusion on the first upsampling result and the fourth feature map to obtain a sixth feature map;
and step S7, performing second upsampling on the sixth feature map, performing feature fusion on a second upsampling result and the first feature map to obtain a seventh feature map, and performing face detection according to the seventh feature map.
2. The face detection method based on multi-layer information fusion according to claim 1, wherein the convolution kernel size of the first convolution operation is 3×3.
3. The face detection method based on multi-layer information fusion according to claim 1, wherein the convolution kernel size of the second convolution operation is 3×3 and the step size is 2.
4. The face detection method based on multi-layer information fusion according to claim 1, wherein the convolution kernel size of the third convolution operation is 3×3 and the step size is 2.
5. The face detection method based on multi-layer information fusion according to claim 1, wherein the convolution kernel size of the fourth convolution operation is 3×3.
6. The face detection method based on multi-layer information fusion according to claim 1, wherein the convolution kernel size of the fifth convolution operation is 1×1.
7. The face detection method based on multi-layer information fusion according to claim 1, wherein the first upsampling is 2-times upsampling.
8. The face detection method based on multi-layer information fusion according to claim 1, wherein the second upsampling is 2-times upsampling.
9. A face detection system based on multi-layer information fusion, which is characterized by applying the face detection method based on multi-layer information fusion according to any one of claims 1 to 8, and specifically comprises:
the feature extraction module, used for extracting features of the face image to be detected according to a pre-generated neural network model to obtain an initial feature map;
the first convolution module is connected with the feature extraction module and used for performing first convolution operation on the initial feature map to obtain a first feature map;
the second convolution module is connected with the feature extraction module and used for performing second convolution operation on the initial feature map to obtain a second feature map;
the third convolution module is connected with the second convolution module and is used for respectively performing third convolution operation and fourth convolution operation on the second feature map to obtain a corresponding third feature map and a corresponding fourth feature map;
the fourth convolution module is connected with the third convolution module and used for performing fifth convolution operation on the third feature map to obtain a fifth feature map;
the fifth convolution module is connected with the fourth convolution module and used for performing first up-sampling on the fifth feature map and performing feature fusion on the first up-sampling result and the fourth feature map to obtain a sixth feature map;
and the sixth convolution module is connected with the fifth convolution module and used for performing second up-sampling on the sixth feature map, performing feature fusion on a second up-sampling result and the first feature map to obtain a seventh feature map, and performing face detection according to the seventh feature map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911373204.9A CN111144310A (en) | 2019-12-27 | 2019-12-27 | Face detection method and system based on multi-layer information fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111144310A true CN111144310A (en) | 2020-05-12 |
Family
ID=70521287
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911373204.9A Withdrawn CN111144310A (en) | 2019-12-27 | 2019-12-27 | Face detection method and system based on multi-layer information fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111144310A (en) |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW200842733A (en) * | 2007-04-17 | 2008-11-01 | Univ Nat Chiao Tung | Object image detection method |
CN107578054A (en) * | 2017-09-27 | 2018-01-12 | 北京小米移动软件有限公司 | Image processing method and device |
WO2018072102A1 (en) * | 2016-10-18 | 2018-04-26 | 华为技术有限公司 | Method and apparatus for removing spectacles in human face image |
CN108182384A (en) * | 2017-12-07 | 2018-06-19 | 浙江大华技术股份有限公司 | A kind of man face characteristic point positioning method and device |
US20180182377A1 (en) * | 2016-12-28 | 2018-06-28 | Baidu Online Network Technology (Beijing) Co., Ltd | Method and device for extracting speech feature based on artificial intelligence |
CN108229296A (en) * | 2017-09-30 | 2018-06-29 | 深圳市商汤科技有限公司 | The recognition methods of face skin attribute and device, electronic equipment, storage medium |
KR20180080081A (en) * | 2017-01-03 | 2018-07-11 | 한국과학기술원 | Method and system for robust face dectection in wild environment based on cnn |
KR101913140B1 (en) * | 2017-12-27 | 2018-10-30 | 인천대학교 산학협력단 | Apparatus and method for Optimizing Continuous Features in Industrial Surveillance using Big Data in the Internet of Things |
CN108985181A (en) * | 2018-06-22 | 2018-12-11 | 华中科技大学 | A kind of end-to-end face mask method based on detection segmentation |
CN109101899A (en) * | 2018-07-23 | 2018-12-28 | 北京飞搜科技有限公司 | A kind of method for detecting human face and system based on convolutional neural networks |
CN109344779A (en) * | 2018-10-11 | 2019-02-15 | 高新兴科技集团股份有限公司 | A kind of method for detecting human face under ring road scene based on convolutional neural networks |
CN109376667A (en) * | 2018-10-29 | 2019-02-22 | 北京旷视科技有限公司 | Object detection method, device and electronic equipment |
CN109829855A (en) * | 2019-01-23 | 2019-05-31 | 南京航空航天大学 | A kind of super resolution ratio reconstruction method based on fusion multi-level features figure |
CN109886066A (en) * | 2018-12-17 | 2019-06-14 | 南京理工大学 | Fast target detection method based on the fusion of multiple dimensioned and multilayer feature |
CN109919013A (en) * | 2019-01-28 | 2019-06-21 | 浙江英索人工智能科技有限公司 | Method for detecting human face and device in video image based on deep learning |
CN110070072A (en) * | 2019-05-05 | 2019-07-30 | 厦门美图之家科技有限公司 | A method of generating object detection model |
CN110163108A (en) * | 2019-04-23 | 2019-08-23 | 杭州电子科技大学 | Robust sonar target detection method based on dual path Fusion Features network |
CN110309706A (en) * | 2019-05-06 | 2019-10-08 | 深圳市华付信息技术有限公司 | Face critical point detection method, apparatus, computer equipment and storage medium |
WO2019223254A1 (en) * | 2018-05-21 | 2019-11-28 | 北京亮亮视野科技有限公司 | Construction method for multi-scale lightweight face detection model and face detection method based on model |
US20190377930A1 (en) * | 2018-06-11 | 2019-12-12 | Zkteco Usa, Llc | Method and System for Face Recognition Via Deep Learning |
CN110598788A (en) * | 2019-09-12 | 2019-12-20 | 腾讯科技(深圳)有限公司 | Target detection method and device, electronic equipment and storage medium |
2019
- 2019-12-27: CN application CN201911373204.9A filed; published as CN111144310A (en); status not active, withdrawn
Non-Patent Citations (3)
Title |
---|
徐亚伟 (Xu Yawei): "Convolutional neural network face recognition method based on deep fusion of multi-layer features" * |
王成济 (Wang Chengji): "A face detection method based on multi-layer feature fusion" * |
石学超 (Shi Xuechao): "Face gender recognition based on a multi-layer-feature-fusion convolutional neural network with an adjustable supervision function" * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112560701A (en) * | 2020-12-17 | 2021-03-26 | 成都新潮传媒集团有限公司 | Face image extraction method and device and computer storage medium |
CN112560701B (en) * | 2020-12-17 | 2022-10-25 | 成都新潮传媒集团有限公司 | Face image extraction method and device and computer storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Mujahid et al. | Real-time hand gesture recognition based on deep learning YOLOv3 model | |
Chang et al. | MedGlasses: A wearable smart-glasses-based drug pill recognition system using deep learning for visually impaired chronic patients | |
Lin et al. | Image manipulation detection by multiple tampering traces and edge artifact enhancement | |
Amin et al. | A comparative review on applications of different sensors for sign language recognition | |
CN110782420A (en) | Small target feature representation enhancement method based on deep learning | |
CN112860888A (en) | Attention mechanism-based bimodal emotion analysis method | |
WO2021051547A1 (en) | Violent behavior detection method and system | |
Sun et al. | Foodtracker: A real-time food detection mobile application by deep convolutional neural networks | |
US20230080098A1 (en) | Object recognition using spatial and timing information of object images at diferent times | |
Yuan et al. | CurSeg: A pavement crack detector based on a deep hierarchical feature learning segmentation framework | |
Li et al. | Fall detection based on fused saliency maps | |
CN111144310A (en) | Face detection method and system based on multi-layer information fusion | |
CN111444850A (en) | Picture detection method and related device | |
Ma et al. | Dynamic gesture contour feature extraction method using residual network transfer learning | |
Husain et al. | Development and validation of a deep learning-based algorithm for drowsiness detection in facial photographs | |
CN113377193A (en) | Vending machine interaction method and system based on reliable gesture recognition | |
CN117593762A (en) | Human body posture estimation method, device and medium integrating vision and pressure | |
Jadhav et al. | GoogLeNet application towards gesture recognition for ASL character identification | |
CN116052250A (en) | Training method, device, equipment and storage medium of detection model | |
CN115143128A (en) | Fault diagnosis method and system for small submersible electric pump | |
Liu et al. | Multi-scale quaternion CNN and BiGRU with cross self-attention feature fusion for fault diagnosis of bearing | |
El-Din et al. | A proposed context-awareness taxonomy for multi-data fusion in smart environments: Types, properties, and challenges | |
CN114048284A (en) | Construction method and device of reference expression positioning and segmentation model | |
Wen | Graphic Perception System for Visually Impaired Groups | |
CN112418160A (en) | Diner monitoring method based on mutual attention neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20200512 |