CN112115831B - Living body detection image preprocessing method - Google Patents
- Publication number
- CN112115831B CN112115831B CN202010948440.5A CN202010948440A CN112115831B CN 112115831 B CN112115831 B CN 112115831B CN 202010948440 A CN202010948440 A CN 202010948440A CN 112115831 B CN112115831 B CN 112115831B
- Authority
- CN
- China
- Prior art keywords
- sample set
- image
- self
- portrait
- adaptive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The living body detection image preprocessing method comprises the following steps. Step S1, adaptive image face interception: collect some human faces in the application scene, calculate the sizes of their face frames, design an adaptive frame range according to those sizes, intercept the face image according to the adaptive frame range, and process it to obtain an intermediate image. Step S2, establishing an image sample set: form a positive sample set and a negative sample set from the intermediate image by randomly copying it into a plurality of images, or into non-central regions of those images. Step S3, processing the image sample set: perform data enhancement on the positive sample set and the negative sample set, keeping the data volumes of the two sets consistent. The method solves two technical problems in the prior art: captured picture information is lost and cannot be used for model training, and the number of images available for training in the current application scene is insufficient.
Description
Technical Field
The invention relates to the technical field of face recognition, and in particular to a living body detection image preprocessing method.
Background
With the development and popularization of technology, face recognition has been widely applied to many scenes in daily life, such as mobile phone login, community access control, and attendance check-in. To prevent other people from using fake faces such as photos, screens, or models, it is important to detect whether the current user is a real person, i.e., living body detection. However, the currently popular living body detection techniques are mainly based on binocular cameras; mature techniques for monocular cameras are few. Examples include judging by features such as brightness and texture, asking the user to perform a specified action, or using a sequence of different-colored lights to simulate structured light. These techniques may work to some extent on mobile phones, but in public scenes other than the mobile phone end, the face is far from the camera and the imaging is not clear enough, so the detection performance of the existing methods drops sharply and the algorithms become unusable.
In conventional living body detection for such scenes, two cameras are combined to make the judgment. To reduce cost and improve the universality of application, more and more manufacturers have begun to focus on monocular-camera living body detection, i.e., using only one ordinary color camera.
The current popular schemes mainly include:
1. asking the user to perform specified actions (nodding, turning the head, etc.);
2. directly extracting face image features (brightness, texture, edges and the like);
3. simulating structured light with a colored illumination sequence.
Scheme 1 requires users to cooperate by performing actions; the user experience is poor, so it is not suitable for public places.
Schemes 2 and 3 are mainly applied at the mobile phone end: the face is very close to the camera, the phone camera's resolution is high, and the corresponding features can be extracted well from the captured face picture. On public devices other than mobile phones, these conditions cannot be met, for example with a camera mounted high up or an interactive large-screen terminal. Face pictures in such scenes are often not clear enough, and because the face is far away, information such as brightness and color illumination is largely lost, or its accuracy is limited and its error is large. Such picture information therefore cannot be fed into a neural network model as effective training data for face recognition, and an effective, fast-converging, high-accuracy neural network cannot be obtained.
Disclosure of Invention
The invention aims to provide a living body detection image preprocessing method for training a neural network, which solves two technical problems in the prior art: captured picture information is lost and cannot be used for model training, and the number of images available for training in the current application scene is insufficient.
The living body detection image preprocessing method comprises the following steps:
step S1: adaptive image face interception: collect some human faces in the application scene, calculate the sizes of their face frames, design an adaptive frame range according to those sizes, intercept the face image according to the adaptive frame range, and process it to obtain an intermediate image;
step S2: establishing an image sample set: form a positive sample set and a negative sample set from the intermediate image by randomly copying it into a plurality of images, or into non-central regions of those images;
step S3: processing the image sample set: perform data enhancement on the positive sample set and the negative sample set, keeping the data volumes of the two sets consistent.
According to the invention, the adaptive frame range is designed with reference to pictures actually captured in the application scene, and the subsequent random images are intercepted according to this adaptive frame range. On the one hand, this ensures the uniformity of all intercepted image areas and improves calculation efficiency; on the other hand, it shortens the training period of the later neural network. In addition, by intercepting images in the application scene and generating two opposite image sets in a specific way, the number of images available for training is increased; and since both the positive and the negative image set are built from portrait areas in the application scene, the judgment accuracy of the neural network can be improved.
By generating a new image set that simulates the actual images captured in the application scene and using it to train the neural network, the invention also overcomes the problem that, in the prior art, images with lost information cannot be effectively utilized.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a schematic view of an application scenario of the adaptive frame scope of the present invention;
FIG. 3 is a schematic diagram of a graphical sample set creation process of the present invention;
FIGS. 4-6 are schematic diagrams of a portion of a sample creation process in a graphic sample set in accordance with the present invention;
fig. 7 is a data schematic of a positive sample set and a negative sample set of the present invention.
Detailed Description
The invention is further illustrated and described below in conjunction with the specific embodiments and the accompanying drawings:
referring to fig. 1, the invention discloses a living body detection image preprocessing method, which comprises the following steps:
referring to fig. 2, step S1: self-adaptive image face interception: and collecting partial human faces in the application scene, calculating the sizes of human face frames (a frame 1 and a frame 2 and … frame 5 in the figure), designing an adaptive frame range according to the sizes of the human face frames, intercepting human face images according to the adaptive frame range, and processing to obtain an intermediate image.
After a face frame is detected, the area near the face is intercepted first. The size of the interception range needs to be adjusted automatically for different application scenes.
For different application scenes, a small number of original pictures containing faces are collected, the face frames are detected by a face recognition system, and the average size of all face frames is calculated. Let the height of the original picture be h and its width w, and let the average width of the detected face frames be m and their average height n. The size coefficient s of the interception range is then determined from h, w, m, and n (the formula appears as an image in the original publication and is not reproduced in this text).
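The scene-statistics step above can be sketched as follows. This is an illustration only: detection is represented by a precomputed list of boxes, the function name is ours, and since the formula for s is not reproduced in the text, the sketch stops at the averages m and n.

```python
def face_box_averages(boxes):
    """Average width m and height n over detected face frames,
    each given as (x, y, width, height)."""
    widths = [w for (_, _, w, h) in boxes]
    heights = [h for (_, _, w, h) in boxes]
    m = sum(widths) / len(boxes)
    n = sum(heights) / len(boxes)
    return m, n

# Two hypothetical face frames detected in sample scene pictures.
boxes = [(10, 10, 100, 120), (200, 50, 140, 160)]
print(face_box_averages(boxes))  # (120.0, 140.0)
```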
based on this coefficient, for each face frame, the upper left corner coordinates thereof are set to (x 0 ,y 0 ) The face frame width is a 0 Height b 0 Then the upper left corner (x 1 ,y 1 ) And intercept width a 1 High b 1 The values of (2) are:
x 1 =x 0 -a 0 ×s
y 1 =y 0 -b 0 ×s
a 1 =a 0 +a 0 ×s×2
b 1 =b 0 +b 0 ×s×2
according to the upper left corner (x 1 ,y 1 ) And intercept width a 1 High b 1 The value of (2) gets the size of the adaptive frame range.
Further, the processing of the intercepted face image in step S1 includes: judging whether the face image intercepted using the adaptive frame range exceeds the range of the original image, and if so, filling in pixels; the pixel-filled image, or the image that did not exceed the valid range, is taken as the intermediate image.
It should be noted that, for a face frame at the edge of an image, the coordinates of the interception range obtained from the adaptive frame range may exceed the range of the original image. In this case the part inside the original image is intercepted first, and the missing area is then filled with all-zero pixel values. This guarantees that the intercepted image is not stretched or deformed, that the face is always centered, and that the features are relatively uniform, which benefits the later creation of new images, guarantees that the created images are not deformed, improves uniformity, and shortens the training time of the neural network model.
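The edge handling described above, crop what lies inside the image and then pad the shortfall with zero-valued pixels so the face stays centered, can be sketched as follows. This is a NumPy-based illustration with names of our own choosing; it assumes integer pixel coordinates.

```python
import numpy as np

def crop_with_zero_pad(img, x1, y1, a1, b1):
    """Crop the a1 x b1 window at (x1, y1) from img; any part of the
    window outside the image is filled with 0-valued pixels."""
    h, w = img.shape[:2]
    out = np.zeros((b1, a1) + img.shape[2:], dtype=img.dtype)
    # Intersection of the requested window with the image bounds.
    sx, sy = max(x1, 0), max(y1, 0)
    ex, ey = min(x1 + a1, w), min(y1 + b1, h)
    if ex > sx and ey > sy:
        out[sy - y1:ey - y1, sx - x1:ex - x1] = img[sy:ey, sx:ex]
    return out

img = np.ones((100, 100), dtype=np.uint8)
patch = crop_with_zero_pad(img, -10, -10, 40, 40)
print(patch.shape, patch[0, 0], patch[20, 20])  # (40, 40) 0 1
```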
Step S2: establishing an image sample set: form a positive sample set and a negative sample set from the intermediate image by randomly copying it into a plurality of images, or into non-central regions of those images.
Referring to fig. 3, the method of creating the image sample set includes:
as shown in fig. 4, step S21: collect pictures containing a portrait in the application scene and intercept them with the adaptive frame as sample set A1; collect pictures containing both a portrait and a contrast object in the application scene and intercept them with the adaptive frame as sample set B1;
the positive sample set includes the sample set A1, and the negative sample set includes the sample set B1.
Referring to fig. 5, step S22: a random image set is input, wherein the random image set comprises a portrait image set and a background image set;
step S23: collect a portrait region X1 from the portrait image set using the adaptive frame, and randomly paste the portrait region X1 into pictures of the background image set to obtain sample set A2;
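Step S23's random paste can be sketched as follows. This is a minimal NumPy illustration with hypothetical names; the patent does not specify the paste implementation, so only the stated behavior (pasting the portrait region at a random position in a background picture) is shown.

```python
import random
import numpy as np

def random_paste(background, portrait):
    """Paste the portrait region at a random position fully inside
    the background picture, per step S23."""
    bh, bw = background.shape[:2]
    ph, pw = portrait.shape[:2]
    assert ph <= bh and pw <= bw, "portrait must fit inside background"
    y = random.randint(0, bh - ph)
    x = random.randint(0, bw - pw)
    out = background.copy()
    out[y:y + ph, x:x + pw] = portrait
    return out

bg = np.zeros((240, 320), dtype=np.uint8)       # blank background picture
face = np.full((60, 50), 255, dtype=np.uint8)   # stand-in portrait region
sample = random_paste(bg, face)
print(sample.sum() == 255 * 60 * 50)  # True
```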
step S24: collect random portrait regions X2 from sample set A1 and sample set A2 using the adaptive frame, and paste the portrait regions X2 onto the central areas of sample set A1 and sample set A2 to obtain sample set B2;
the positive sample set includes the sample set A1 and the sample set A2, and the negative sample set includes the sample set B1 and the sample set B2.
Referring to fig. 6, step S25: collect random portrait regions X3 from sample set A1 and sample set A2 using the adaptive frame, and paste the portrait regions X3 onto the central areas of the background image set to obtain sample set B3;
step S26: collect random portrait regions X4 from sample set A1 and sample set A2 using the adaptive frame, and paste the portrait regions X4 onto non-central areas of sample set A1 and sample set A2 to obtain sample set A3;
as shown in fig. 7, sample set A1, sample set A2, and sample set A3 are combined into the positive sample set, and sample set B1, sample set B2, and sample set B3 are combined into the negative sample set.
Step S3: processing the image sample set: perform data enhancement on the positive sample set and the negative sample set, keeping the data volumes of the two sets consistent.
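Step S3 could look like the following sketch: augment both sets (here only a horizontal flip, as a stand-in for whatever enhancements are actually used), then downsample the larger set so the two class counts match. The balancing strategy is our assumption; the patent only requires that the data volumes end up consistent.

```python
import random
import numpy as np

def augment_and_balance(pos, neg, seed=0):
    """Flip-augment both sample sets, then downsample the larger
    one so positive and negative counts are equal (step S3)."""
    rng = random.Random(seed)
    pos = pos + [np.fliplr(im) for im in pos]
    neg = neg + [np.fliplr(im) for im in neg]
    n = min(len(pos), len(neg))
    return rng.sample(pos, n), rng.sample(neg, n)

pos = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]
neg = [np.ones((8, 8), dtype=np.uint8) for _ in range(5)]
p, q = augment_and_balance(pos, neg)
print(len(p), len(q))  # 6 6
```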
According to the invention, the adaptive frame range is designed with reference to pictures actually captured in the application scene, and the subsequent random images are intercepted according to this adaptive frame range. On the one hand, this ensures the uniformity of all intercepted image areas and improves calculation efficiency; on the other hand, it shortens the training period of the later neural network. In addition, by intercepting images in the application scene and generating two opposite image sets in a specific way, the number of images available for training is increased; and since both the positive and the negative image set are built from portrait areas in the application scene, the judgment accuracy of the neural network can be improved.
By generating a new image set that simulates the actual images captured in the application scene and using it to train the neural network, the invention also overcomes the problem that, in the prior art, images with lost information cannot be effectively utilized.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the invention, not to limit its scope. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions can be made to the technical solution without departing from its spirit and scope.
Claims (3)
1. A living body detection image preprocessing method, characterized by comprising the following steps:
step S1: adaptive image face interception: collect some human faces in the application scene, calculate the sizes of their face frames, design an adaptive frame range according to those sizes, intercept the face image according to the adaptive frame range, and process it to obtain an intermediate image;
step S2: establishing an image sample set: form a positive sample set and a negative sample set from the intermediate image by randomly copying it into a plurality of images, or into non-central regions of those images, wherein the method of creating the image sample set comprises:
step S21: collect pictures containing a portrait in the application scene and intercept them with the adaptive frame as sample set A1; collect pictures containing both a portrait and a contrast object in the application scene and intercept them with the adaptive frame as sample set B1;
step S22: a random image set is input, wherein the random image set comprises a portrait image set and a background image set;
step S23: collect a portrait region X1 from the portrait image set using the adaptive frame, and randomly paste the portrait region X1 into pictures of the background image set to obtain sample set A2;
step S24: collect random portrait regions X2 from sample set A1 and sample set A2 using the adaptive frame, and paste the portrait regions X2 onto the central areas of sample set A1 and sample set A2 to obtain sample set B2;
the positive sample set comprises the sample set A1 and the sample set A2, and the negative sample set comprises the sample set B1 and the sample set B2;
step S3: processing the image sample set: perform data enhancement on the positive sample set and the negative sample set, keeping the data volumes of the two sets consistent.
2. The living body detection image preprocessing method according to claim 1, wherein the processing of the intercepted face image in step S1 includes: judging whether the face image intercepted using the adaptive frame range exceeds the range of the original image, and if so, filling in pixels; the pixel-filled image, or the image that did not exceed the valid range, is taken as the intermediate image.
3. The living body detection image preprocessing method according to claim 1, wherein the method is used to form the image sample set and input the image sample set into a neural network model, so as to train the neural network model to perform living body detection on a face.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010948440.5A CN112115831B (en) | 2020-09-10 | 2020-09-10 | Living body detection image preprocessing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010948440.5A CN112115831B (en) | 2020-09-10 | 2020-09-10 | Living body detection image preprocessing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112115831A CN112115831A (en) | 2020-12-22 |
CN112115831B true CN112115831B (en) | 2024-03-15 |
Family
ID=73803167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010948440.5A Active CN112115831B (en) | 2020-09-10 | 2020-09-10 | Living body detection image preprocessing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112115831B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107818313A (en) * | 2017-11-20 | 2018-03-20 | 腾讯科技(深圳)有限公司 | Vivo identification method, device, storage medium and computer equipment |
CN109635770A (en) * | 2018-12-20 | 2019-04-16 | 上海瑾盛通信科技有限公司 | Biopsy method, device, storage medium and electronic equipment |
CN109858375A (en) * | 2018-12-29 | 2019-06-07 | 深圳市软数科技有限公司 | Living body faces detection method, terminal and computer readable storage medium |
CN110569808A (en) * | 2019-09-11 | 2019-12-13 | 腾讯科技(深圳)有限公司 | Living body detection method and device and computer equipment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11663307B2 (en) * | 2018-09-24 | 2023-05-30 | Georgia Tech Research Corporation | RtCaptcha: a real-time captcha based liveness detection system |
CN110414437A (en) * | 2019-07-30 | 2019-11-05 | 上海交通大学 | Face datection analysis method and system are distorted based on convolutional neural networks Model Fusion |
CN111524145B (en) * | 2020-04-13 | 2024-06-04 | 北京智慧章鱼科技有限公司 | Intelligent picture cropping method, intelligent picture cropping system, computer equipment and storage medium |
-
2020
- 2020-09-10 CN CN202010948440.5A patent/CN112115831B/en active Active
Non-Patent Citations (2)
Title |
---|
Paste and Learn: Surprisingly Easy Synthesis for Instance Detection; Debidatta Dwibedi et al.; 2017 IEEE International Conference on Computer Vision; Sections 4-6, Figures 1-2, and Table 2 *
Design of a living face recognition system based on a webcam; Wang Chunjiang et al.; Electronic Science and Technology; Vol. 33, No. 6; pp. 63-68 *
Also Published As
Publication number | Publication date |
---|---|
CN112115831A (en) | 2020-12-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11830230B2 (en) | Living body detection method based on facial recognition, and electronic device and storage medium | |
CN107993216B (en) | Image fusion method and equipment, storage medium and terminal thereof | |
CN110929569B (en) | Face recognition method, device, equipment and storage medium | |
WO2022001509A1 (en) | Image optimisation method and apparatus, computer storage medium, and electronic device | |
CN110610526B (en) | Method for segmenting monocular image and rendering depth of field based on WNET | |
CN111476710B (en) | Video face changing method and system based on mobile platform | |
CN111753782B (en) | False face detection method and device based on double-current network and electronic equipment | |
CN111985281B (en) | Image generation model generation method and device and image generation method and device | |
CN111008935B (en) | Face image enhancement method, device, system and storage medium | |
CN110674759A (en) | Monocular face in-vivo detection method, device and equipment based on depth map | |
CN103955889B (en) | Drawing-type-work reviewing method based on augmented reality technology | |
CN110443252A (en) | A kind of character detecting method, device and equipment | |
CN113160231A (en) | Sample generation method, sample generation device and electronic equipment | |
CN109784215B (en) | In-vivo detection method and system based on improved optical flow method | |
CN117496019B (en) | Image animation processing method and system for driving static image | |
CN115082992A (en) | Face living body detection method and device, electronic equipment and readable storage medium | |
WO2024198475A1 (en) | Face anti-spoofing recognition method and apparatus, and electronic device and storage medium | |
CN115731591A (en) | Method, device and equipment for detecting makeup progress and storage medium | |
CN111126283B (en) | Rapid living body detection method and system for automatically filtering fuzzy human face | |
CN117409463A (en) | Live broadcast strategy management system | |
CN112115831B (en) | Living body detection image preprocessing method | |
CN115690934A (en) | Master and student attendance card punching method and device based on batch face recognition | |
CN113837018B (en) | Cosmetic progress detection method, device, equipment and storage medium | |
CN113837020B (en) | Cosmetic progress detection method, device, equipment and storage medium | |
CN115082960A (en) | Image processing method, computer device and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |