AU2021101197A4 - Facial recognition system based on neural network - Google Patents

Facial recognition system based on neural network

Info

Publication number
AU2021101197A4
Authority
AU
Australia
Prior art keywords
region
face
control module
central control
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2021101197A
Inventor
Tao Dong
Xiaomei Gong
Zhuoxian ZHANG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University
Original Assignee
Southwest University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University filed Critical Southwest University
Priority to AU2021101197A4
Application granted
Publication of AU2021101197A4
Legal status: Ceased
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a facial recognition system based on a neural network, comprising a shell, a camera, a display and a central control module. Before using the facial recognition system, users input multiple photos of themselves taken from different angles into the central control module. The central control module recognizes the different photos of the same user and obtains static facial feature data through a built-in algorithm. During recognition, the camera collects face video information and transmits it to the central control module, which selects the most suitable face image and compensates it for distance and brightness. For comparison, the face image is divided into several regions that are compared one by one, in the spirit of calculus. The invention obtains facial features comprehensively through early data recognition, speeds up recognition, and increases recognition accuracy through region-wise comparison and weight distribution.

Description

A facial recognition system based on a neural network
1. Technical Field
The present invention relates to the field of face recognition technology, and in particular to a facial recognition system based on a neural network.
2. Background
With the development of computer and network technology in the 21st century, hidden dangers to information security have become more and more prominent, and countries pay increasing attention to public security. Information identification and detection have therefore taken on unprecedented importance, with applications in almost every field of society. Nowadays, identification methods such as numbers, magnetic cards and passwords dominate daily life, but they are easy to lose, easy to forge and easy to forget. With the continuous development of technology, traditional identification methods face growing challenges and their reliability has been greatly reduced, so new information identification and detection technologies are bound to emerge. Increasingly, people are turning their attention to biological signs, which are determined by human DNA and are unique to each person.
Biometrics can be broadly divided into two categories. One is the physical signs of a person, such as fingerprints, iris, face and body odor. The other is behavioral signs, such as handwriting, pace and inertia. All of these can be identified by modern computer image processing technology. Compared with other human physiological characteristics, the face has the advantages of easy collection, non-contact acquisition and static capture, and it is readily accepted by the public.
Face recognition is one of the hot research directions in the field of computer vision and has a wide range of applications; for example, it plays an important role in security, monitoring, entertainment and other fields.
According to one survey, 90% of people get to know each other and obtain basic information about each other by observing facial features: the so-called first impression. Even when external conditions such as age, expression and lighting change a person's facial appearance dramatically, humans can still recognize the face. This phenomenon indicates that a great deal of characteristic information exists in a person's face, so a person can be judged from the features of the face.
At present, the quality of devices with face recognition functions is uneven, and some devices suffer from low efficiency and a slow recognition process.
3. Summary of the Invention
Therefore, the invention provides a facial recognition system based on a neural network to overcome the low efficiency of face recognition in the prior art.
To achieve the above purpose, the invention provides a facial recognition system based on a neural network, comprising the following components:
Shell: The shell protects the components mounted inside it;
Camera: The camera is arranged on the upper part of the shell to scan the face in the detection area;
Display: The display is arranged on the upper part of the shell, located on one side of the camera device and connected with it, to display the picture detected by the camera device in real time and feed the scanned face state back to the user. The user adjusts the position of his or her face according to this feedback;
Central control module: The central control module is arranged inside the shell and connected to the camera device, which transmits the scanned face information to the central control module.
Before using the facial recognition system, users need to input multiple photos of themselves taken from different angles into the central control module. The central control module recognizes the different photos of the same user and obtains static facial feature data through a built-in algorithm. The static feature data serve as standard image data samples, and each sample can generate the corresponding user's face image. The data samples of multiple users are combined into the data sample matrix group K0.
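As a minimal illustration of this enrollment step, the following Python sketch builds K0 from per-user photo sets. The feature extractor is a stand-in for the patent's unspecified built-in algorithm; all function names and values are assumptions for illustration, not the patented implementation.

import numpy as np

def extract_static_features(photo: np.ndarray) -> np.ndarray:
    # Placeholder for the built-in algorithm: a real system would run the
    # neural network here; per-channel means are used only for illustration.
    return photo.astype(np.float64).mean(axis=(0, 1))

def enroll(users: dict) -> dict:
    """users maps a user id to photos taken from different angles; returns K0."""
    K0 = {}
    for uid, photos in users.items():
        feats = np.stack([extract_static_features(p) for p in photos])
        K0[uid] = feats.mean(axis=0)  # one standard data sample per user
    return K0

rng = np.random.default_rng(0)
photos = {"user_u": [rng.integers(0, 256, (64, 64, 3)) for _ in range(3)]}
print(enroll(photos)["user_u"].shape)  # -> (3,)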
The camera collects face video information and delivers it to the central control module, which analyzes each frame of the video to obtain multiple frames of face images to be recognized and selects the most suitable face image k.
The camera has an infrared detection function that can detect the distance L between the person and the camera device and transmit the result to the central control module. The central control module compensates the face image k to k' according to the distance L.
The central control module projects the face image k' onto the plane and divides the face into n regions in pixels. The central control module detects the brightness of all regions within the face image k', adjusts the brightness and generates a new face image k".
The central control module compares the face image k" with the user face images stored in the matrix group K0 and selects the two face images with the highest similarity for accurate comparison. The central control module divides the face image k" into an eye image, a nose image, a mouth image and the whole face image. The whole face image is divided into n1 regions, the eye image into n2 regions, the nose image into n3 regions and the mouth image into n4 regions. The central control module compares each single-region image with the corresponding parts of the two selected face images, judges whether the single region is consistent, and calculates the anastomosis rate of each partition image. Each partition image carries a different weight. The central control module combines the anastomosis rates and weights of the partition images to calculate whether the face image k" passes.
When the face recognition system carries out face recognition, the camera collects face video information and transmits it to the central control module, which analyzes each frame of the video to obtain multiple face images to be recognized and selects the most suitable face image k. The face image is M1 in length and N1 in width.
The camera has an infrared detection function that can detect the distance L between the person and the camera and transfer the result to the central control module. The central control module compensates the face image k to k' according to the distance L. The length of the compensated image is M and the width is N, where M = M1 × L × m' and N = N1 × L × n'; m' is the length compensation parameter of the face image and n' is the width compensation parameter of the face image.
The central control module projects the face image k' to the plane, establishes a rectangular
coordinate system with the tip of the nose as the origin, takes the transverse direction of the face as the
X axis, takes the longitudinal direction of the face as the Y axis, and divides the face into n regions in
pixels.
The central control module is equipped with a brightness matrix C0 and a brightness adjustment parameter matrix D0. For the brightness matrix C0, C0 = (C1, C2, C3, C4), where C1 is the first preset brightness parameter, C2 is the second preset brightness parameter, C3 is the third preset brightness parameter, C4 is the fourth preset brightness parameter, and the brightness parameters increase in order. For the brightness adjustment parameter matrix D0, D0 = (D1, D2, D3, D4), where D1 is the first preset brightness adjustment parameter, D2 is the second preset brightness adjustment parameter, D3 is the third preset brightness adjustment parameter, and D4 is the fourth preset brightness adjustment parameter.
The central control module detects the brightness Ci of region i of the face image k' and compares Ci with the parameters of the C0 matrix:
When Ci ≤ C1, the central control module judges that the brightness of region i is insufficient and selects D1 from the D0 matrix as the brightness adjustment parameter.
When C1 < Ci ≤ C2, the central control module judges that the brightness of region i is insufficient and selects D2 from the D0 matrix as the brightness adjustment parameter.
When C2 < Ci ≤ C3, the central control module judges that the brightness of region i is in the standard state.
When C3 < Ci ≤ C4, the central control module judges that the brightness of region i is too high and selects D3 from the D0 matrix as the brightness adjustment parameter.
When Ci > C4, the central control module judges that the brightness of region i is too high and selects D4 from the D0 matrix as the brightness adjustment parameter.
When the central control module judges that the brightness of region i is not in the standard state, it adjusts the brightness of region i to Ci'. When it judges that the brightness of region i is insufficient, Ci' = Ci + (C3 − Ci) × Dj, j = 1, 2. When it judges that the brightness of region i is too high, Ci' = Ci − (Ci − C2) × Dr, r = 3, 4.
When the central control module has adjusted the brightness of region i to Ci', it compares Ci' with the parameters in the C0 matrix. When C2 < Ci' ≤ C3, the central control module judges that the brightness of region i is in the standard state. When Ci' is not within C2–C3, the above operation is repeated until C2 < Ci' ≤ C3.
The central control module adjusts all regions in the face image k' and generates a new face image k". The regional brightness of the face image k" is within the range C2–C3. The central control module generates the face image matrix group H0 from k".
The central control module compares the face image k" with the user face images stored in the matrix group K0 and selects the two face images with the highest similarity for accurate comparison. The two face images correspond to the data sample matrix group Ku and the data sample matrix group Kv.
Further, the central control module projects the face image ku in the matrix group Ku to the plane, establishes a rectangular coordinate system with the tip of the nose as the origin, takes the transverse direction of the face as the X axis, takes the longitudinal direction of the face as the Y axis, and divides the face into n regions in pixels.
Ku = (Ku1, Ku2, Ku3, ..., Kun). For Ku1 = (au1, bu1), au1 is the first endpoint where the face contour of user u intersects the boundary of the Ku1 region, and bu1 is the second such endpoint; line segment ku1 is generated by connecting the two points.
For Ku2 = (au2, bu2), au2 is the first endpoint where the face contour of user u intersects the boundary of the Ku2 region, and bu2 is the second such endpoint; line segment ku2 is generated by connecting the two points.
For Ku3 = (au3, bu3), au3 is the first endpoint where the face contour of user u intersects the boundary of the Ku3 region, and bu3 is the second such endpoint; line segment ku3 is generated by connecting the two points.
For Kun = (aun, bun), aun is the first endpoint where the face contour of user u intersects the boundary of the Kun region, and bun is the second such endpoint; line segment kun is generated by connecting the two points.
Further, the central control module extracts data from the face image k" and generates the matrix k0 = (k1, k2, k3, k4), where k1 is the overall face image of the face image k", k2 is the eye image, k3 is the nose image and k4 is the mouth image.
The face image matrix group H0 = (H1, H2, H3, ..., Hn) is generated, where H1–Hn correspond to the regions of k1, Ha–Hb to the regions of k2, Hc–Hd to the regions of k3 and He–Hf to the regions of k4.
For H1 = (p1, q1), p1 is the first endpoint where the face contour of H intersects the boundary of the H1 region, and q1 is the second such endpoint; line segment h1 is generated by connecting the two points.
For H2 = (p2, q2), p2 is the first endpoint where the face contour of H intersects the boundary of the H2 region, and q2 is the second such endpoint; line segment h2 is generated by connecting the two points.
For H3 = (p3, q3), p3 is the first endpoint where the face contour of H intersects the boundary of the H3 region, and q3 is the second such endpoint; line segment h3 is generated by connecting the two points.
For Hn = (pn, qn), pn is the first endpoint where the face contour of H intersects the boundary of the Hn region, and qn is the second such endpoint; line segment hn is generated by connecting the two points.
Further, the central control module compares H0 with Ku and generates a comparison matrix M0 = (M1, M2, M3, ..., Mn). M1(h1, ku1) is defined as follows:
M1(h1, ku1) = (∠(h1, ku1), ⊥(h1, ku1), ∥(h1, ku1))
∠(h1, ku1) represents the angle θ1 between line segment h1 and line segment ku1 in the preset Cartesian coordinate system. ⊥(h1, ku1) represents the vertical distance α1 between line h1 and line ku1 in the preset Cartesian coordinate system. ∥(h1, ku1) represents the horizontal distance β1 between line h1 and line ku1 in the preset Cartesian coordinate system.
The central control module is provided with a contrast parameter P and a formula for calculating the judgment value Q of a single region:
Q = ∛(γθ1² + εα1³ + τβ1³)
γ is the calculation weight of θ1 on the judgment value Q, ε is the calculation weight of α1 on the judgment value Q, and τ is the calculation weight of β1 on the judgment value Q. When Q < P, it is judged that the H1 region is consistent with the Ku1 region image.
When Q ≥ P, it is judged that the image of the H1 region is not consistent with the Ku1 region image.
Further, the central control module compares all regions in the matrix group H0 with the regions in the matrix group Ku. The k1 region is divided into n1 regions, of which m1 regions match. The k2 region is divided into n2 regions, of which m2 match. The k3 region is divided into n3 regions, of which m3 match. The k4 region is divided into n4 regions, of which m4 match.
The central control module calculates the regional anastomosis rates: the anastomosis rate of the k1 region is S1 = m1/n1, that of the k2 region is S2 = m2/n2, that of the k3 region is S3 = m3/n3, and that of the k4 region is S4 = m4/n4.
The central control module calculates the coincidence rate Sz between the face image k" and the face image ku:
Sz = √(S1 × z1 + S2 × z2 + S3 × z3 + S4 × z4)
where z1 is the weight parameter of S1 on Sz, z2 is the weight parameter of S2 on Sz, z3 is the weight parameter of S3 on Sz, and z4 is the weight parameter of S4 on Sz.
The central control module is also provided with a threshold parameter S for the coincidence rate between the face image k" and the face image ku, and compares Sz with S as follows:
When Sz > S, the central control module judges that the face image k" and the face image ku are the same user's face.
When Sz ≤ S, the central control module judges that the face image k" and the face image ku are different faces.
When the central control module judges that the face image k" and the face image ku are the same user's face, it obtains the static feature data of the face image k" according to the preset algorithm, and the static feature data are incorporated into the ku data samples as standard images.
When the central control module judges that the face image k" and the face image ku are different faces, it projects the face image kv in the matrix group Kv to the plane and repeats the above operation to recognize the face image kv.
When the central control module judges that the face image k", the face image ku and the face image kv are all different faces, it judges that the face cannot pass.
Compared with the prior art, the invention has beneficial effects in the process of using the facial recognition system. First of all, when multiple photos of a user taken from different angles are input into the central control module, the central control module identifies the different photos of the same user. Static facial feature data are obtained by the built-in algorithm and taken as standard image data samples. Each image data sample can generate the corresponding user's face image, and the data samples of multiple users are combined to generate the data sample matrix group K0. Through this early data recognition, the facial features can be obtained comprehensively and the speed of face recognition is increased.
Further, the camera collects face video information and delivers it to the central control module. The central control module analyzes each frame of the video to obtain multiple frames of face images to be recognized and selects the most suitable face image k. The camera has an infrared detection function that can detect the distance L between the person and the camera device and transmit the result to the central control module. The central control module compensates the face image k to k' according to the distance L, projects the face image k' onto the plane and divides the face into n regions in pixels. The central control module detects the brightness of all regions within the face image k', adjusts the brightness and generates a new face image k". By selecting and compensating the collected information, the recognition speed is further increased. Meanwhile, the infrared detection device can perform biometric verification, preventing people from passing identification with photos or sculptures.
Further, the central control module compares the face image k" with the user face images stored in the matrix group K0, and the two face images with the highest similarity are selected for accurate comparison. The central control module divides the face image k" into an eye image, a nose image, a mouth image and the whole face image. The whole face image is divided into n1 regions, the eye image into n2 regions, the nose image into n3 regions and the mouth image into n4 regions. The central control module compares each single-region image with the corresponding parts of the two face images, judges whether each single region is consistent and calculates the coincidence rate of each partition image. Each partition carries a different weight. The central control module combines the anastomosis rates and weights of the partition images to calculate whether the face image k" passes. By means of this calculus-style partitioning and weight distribution, the accuracy of recognition is increased.
Further, when face recognition is passed, the recognized image is incorporated into the data samples. Supplementing the samples in time increases their number, strengthens the identification ability of the system and further accelerates identification.
4. Description of the Drawings
Fig. 1 is a structural schematic diagram of a facial recognition system based on a neural network.
5. Specific Embodiments
In order to make the purpose and advantages of the invention more clearly understood, the present invention is further described below in combination with examples. The specific embodiments described herein are used only to explain the invention and are not intended to limit it.
The preferred embodiment of the invention is described with reference to the attached drawings. Technicians in the field shall understand that these embodiments are used only to explain the technical principle of the invention and do not limit its scope of protection.
It should be noted that, in the description of the invention, terms such as "up", "down", "left", "right", "inside" and "outside" indicating directions or positional relations are based on the directions or positional relations shown in the attached drawings. They are used only for convenience of description and do not indicate or suggest that the device or element must have, be constructed in, or operate in a particular orientation; therefore they cannot be construed as limiting the present invention.
It is further noted that, in the description of the invention, the terms "installation", "connected" and "connection" are to be understood broadly unless otherwise expressly prescribed and qualified. For example, a connection can be fixed, detachable or integral; mechanical or electrical; direct, indirect through an intermediary, or internal between two components. For technical personnel in the field, the specific meaning of the above terms in the invention may be understood according to the specific situation.
The structural schematic diagram of the facial recognition system based on the neural network is shown in Fig. 1. The facial recognition system comprises the shell 1, the camera device 2, the display screen 3 and the central control module 4. The shell 1 protects the components loaded inside it. The camera device 2 is arranged on the upper part of the shell 1 to scan the face in the detection area. The display 3 is arranged on the upper part of the shell, located on one side of the camera device 2 and connected with it, to display the picture detected by the camera device in real time and feed the scanned face state back to the user. Users adjust the position of their face based on this feedback. The central control module 4 is arranged inside the shell 1 and connected with the camera device 2, which transmits the scanned face information to the central control module 4.
Before using the facial recognition system, users need to input multiple photos of themselves taken from different angles into the central control module 4. The central control module 4 recognizes the different photos of the same user and obtains static facial feature data through a built-in algorithm. The static feature data serve as standard image data samples, and each sample can generate the corresponding user's face image. The data samples of multiple users are combined into the data sample matrix group K0.
The camera 2 collects face video information and delivers it to the central control module 4, which analyzes each frame of the video to obtain multiple frames of face images to be recognized and selects the most suitable face image k.
The camera 2 has an infrared detection function that can detect the distance L between the person and the camera device 2. The detection result is transmitted to the central control module 4, which compensates the face image k to k' according to the distance L.
The central control module 4 projects the face image k' onto the plane and divides the face into n
regions in pixels. The central control module 4 detects the brightness of all areas within the face image
k', adjusts the brightness and generates a new face image k".
The central control module 4 compares the face image k" with the user face images stored in the matrix group K0 and selects the two face images with the highest similarity for accurate comparison. The central control module 4 divides the face image k" into an eye image, a nose image, a mouth image and the whole face image. The whole face image is divided into n1 regions, the eye image into n2 regions, the nose image into n3 regions and the mouth image into n4 regions. The central control module 4 compares each single-region image with the corresponding parts of the two selected face images, judges whether the single region is consistent, and calculates the anastomosis rate of each partition image. Each partition image carries a different weight. The central control module 4 combines the anastomosis rates and weights of the partition images to calculate whether the face image k" passes. When face recognition passes, the recognized image is incorporated into the data samples.
Specifically, when the face recognition system carries out face recognition, the camera device 2 collects face video information and transmits it to the central control module 4, which analyzes each frame of the video to obtain multiple face images to be recognized and selects the most suitable face image k. The face image is M1 in length and N1 in width.
The camera device 2 has an infrared detection function that can detect the distance L between the person and the camera device 2, and the result is transmitted to the central control module 4. The central control module 4 compensates the face image k to k' according to the distance L. The length of the compensated image is M and the width is N, where M = M1 × L × m' and N = N1 × L × n'; m' is the length compensation parameter of the face image and n' is the width compensation parameter of the face image.
The central control module 4 projects the face image k' to the plane, establishes a rectangular
coordinate system with the tip of the nose as the origin, takes the transverse direction of the face as the
X axis, takes the longitudinal direction of the face as the Y axis, and divides the face into n regions in
pixels.
Specifically, the central control module 4 is equipped with a brightness matrix C0 and a brightness adjustment parameter matrix D0.
For the brightness matrix C0, C0 = (C1, C2, C3, C4), where C1 is the first preset brightness parameter, C2 is the second preset brightness parameter, C3 is the third preset brightness parameter, C4 is the fourth preset brightness parameter, and the brightness parameters increase in order.
For the brightness adjustment parameter matrix D0, D0 = (D1, D2, D3, D4), where D1 is the first preset brightness adjustment parameter, D2 is the second preset brightness adjustment parameter, D3 is the third preset brightness adjustment parameter, and D4 is the fourth preset brightness adjustment parameter.
The central control module 4 detects the brightness Ci of region i of the face image k' and compares Ci with the parameters of the C0 matrix:
When Ci ≤ C1, the central control module 4 judges that the brightness of region i is insufficient and selects D1 from the D0 matrix as the brightness adjustment parameter.
When C1 < Ci ≤ C2, the central control module 4 judges that the brightness of region i is insufficient and selects D2 from the D0 matrix as the brightness adjustment parameter.
When C2 < Ci ≤ C3, the central control module 4 judges that the brightness of region i is in the standard state.
When C3 < Ci ≤ C4, the central control module 4 judges that the brightness of region i is too high and selects D3 from the D0 matrix as the brightness adjustment parameter.
When Ci > C4, the central control module 4 judges that the brightness of region i is too high and selects D4 from the D0 matrix as the brightness adjustment parameter.
When the central control module 4 judges that the brightness of region i is not in the standard state, it adjusts the brightness of region i to Ci'. When it judges that the brightness of region i is insufficient, Ci' = Ci + (C3 − Ci) × Dj, j = 1, 2. When it judges that the brightness of region i is too high, Ci' = Ci − (Ci − C2) × Dr, r = 3, 4.
When the central control module 4 has adjusted the brightness of region i to Ci', it compares Ci' with the parameters in the C0 matrix. When C2 < Ci' ≤ C3, the central control module 4 judges that the brightness of region i is in the standard state. When Ci' is not between C2 and C3, the above operation is repeated until C2 < Ci' ≤ C3.
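A minimal Python sketch of this per-region rule follows, assuming illustrative values for C0 and D0 (the patent fixes only their ordering); with adjustment parameters in (0, 1] the iteration converges to the standard state C2 < Ci' ≤ C3.

C0 = (60.0, 90.0, 150.0, 200.0)  # C1 < C2 < C3 < C4 (illustrative values)
D0 = (0.8, 0.5, 0.5, 0.8)        # D1, D2 (insufficient), D3, D4 (too high)

def adjust_region_brightness(Ci: float) -> float:
    """Repeat the adjustment until region brightness reaches the standard state."""
    C1, C2, C3, C4 = C0
    while not (C2 < Ci <= C3):
        if Ci <= C1:
            Ci = Ci + (C3 - Ci) * D0[0]  # far too dark: use D1
        elif Ci <= C2:
            Ci = Ci + (C3 - Ci) * D0[1]  # slightly dark: use D2
        elif Ci <= C4:
            Ci = Ci - (Ci - C2) * D0[2]  # slightly bright: use D3
        else:
            Ci = Ci - (Ci - C2) * D0[3]  # far too bright: use D4
    return Ci

print(adjust_region_brightness(30.0))   # dark region raised into C2..C3
print(adjust_region_brightness(230.0))  # bright region lowered into C2..C3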
The central control module 4 adjusts all regions in the face image k' and generates a new face image k". The regional brightness of the face image k" is within the range C2–C3. The central control module 4 generates the face image matrix group H0 from k".
Specifically, the central control module 4 compares the face image k" with the face images of the users stored in the matrix group K0. The two face images with the highest similarity are selected for accurate comparison; they correspond to the data sample matrix group Ku and the data sample matrix group Kv.
The central control module 4 projects the face image ku in the matrix group Ku to the plane,
establishes a rectangular coordinate system with the tip of the nose as the origin, takes the transverse
direction of the face as the X axis, takes the longitudinal direction of the face as the Y axis, and divides
the face into n regions in pixels.
Ku = (Ku1, Ku2, Ku3, ..., Kun). For Ku1 = (au1, bu1), au1 is the first endpoint where the face contour of user u intersects the boundary of the Ku1 region, and bu1 is the second such endpoint; line segment ku1 is generated by connecting the two points.
For Ku2 = (au2, bu2), au2 is the first endpoint where the face contour of user u intersects the boundary of the Ku2 region, and bu2 is the second such endpoint; line segment ku2 is generated by connecting the two points.
For Ku3 = (au3, bu3), au3 is the first endpoint where the face contour of user u intersects the boundary of the Ku3 region, and bu3 is the second such endpoint; line segment ku3 is generated by connecting the two points.
For Kun = (aun, bun), aun is the first endpoint where the face contour of user u intersects the boundary of the Kun region, and bun is the second such endpoint; line segment kun is generated by connecting the two points.
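The per-region segment construction can be sketched as follows; the contour representation, the rectangular region bounds and the sample contour are assumptions for illustration, and the first and last contour samples inside a region approximate the two boundary-crossing endpoints.

import numpy as np

def region_segment(contour: np.ndarray, x_range, y_range):
    """contour: (N, 2) array of (x, y) points sampled along the face contour.
    Returns the endpoint pair (a, b) of the chord inside the rectangular
    region, or None if the contour does not pass through the region."""
    x0, x1 = x_range
    y0, y1 = y_range
    inside = ((contour[:, 0] >= x0) & (contour[:, 0] <= x1)
              & (contour[:, 1] >= y0) & (contour[:, 1] <= y1))
    idx = np.flatnonzero(inside)
    if idx.size < 2:
        return None
    # First/last contour samples inside the region stand in for (au_i, bu_i);
    # connecting them yields the line segment ku_i.
    return contour[idx[0]], contour[idx[-1]]

# Example: a circular stand-in "contour" crossing the region x in [0, 30], y in [20, 60].
theta = np.linspace(0.0, 2.0 * np.pi, 200)
contour = np.stack([50.0 * np.cos(theta), 50.0 * np.sin(theta)], axis=1)
print(region_segment(contour, (0.0, 30.0), (20.0, 60.0)))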
Specifically, the central control module 4 extracts data from the face image k" and generates the matrix k0 = (k1, k2, k3, k4), where k1 is the overall face image of the face image k", k2 is the eye image, k3 is the nose image and k4 is the mouth image.
The face image matrix group H0 = (H1, H2, H3, ..., Hn) is generated, where H1–Hn correspond to the regions of k1, Ha–Hb to the regions of k2, Hc–Hd to the regions of k3 and He–Hf to the regions of k4.
For H1 = (p1, q1), p1 is the first endpoint where the face contour of H intersects the boundary of the H1 region, and q1 is the second such endpoint; line segment h1 is generated by connecting the two points.
For H2 = (p2, q2), p2 is the first endpoint where the face contour of H intersects the boundary of the H2 region, and q2 is the second such endpoint; line segment h2 is generated by connecting the two points.
For H3 = (p3, q3), p3 is the first endpoint where the face contour of H intersects the boundary of the H3 region, and q3 is the second such endpoint; line segment h3 is generated by connecting the two points.
For Hn = (pn, qn), pn is the first endpoint where the face contour of H intersects the boundary of the Hn region, and qn is the second such endpoint; line segment hn is generated by connecting the two points.
Specifically, the central control module 4 compares H0 with Ku and generates a comparison matrix M0 = (M1, M2, M3, ..., Mn). M1(h1, ku1) is defined as follows:
M1(h1, ku1) = (∠(h1, ku1), ⊥(h1, ku1), ∥(h1, ku1))
∠(h1, ku1) represents the angle θ1 between line segment h1 and line segment ku1 in the preset Cartesian coordinate system. ⊥(h1, ku1) represents the vertical distance α1 between line h1 and line ku1 in the preset Cartesian coordinate system. ∥(h1, ku1) represents the horizontal distance β1 between line h1 and line ku1 in the preset Cartesian coordinate system.
The central control module 4 is provided with a contrast parameter P and a formula for calculating the judgment value Q of a single region:
Q = ∛(γθ1² + εα1³ + τβ1³)
where γ is the calculation weight of θ1 on the judgment value Q, ε is the calculation weight of α1 on the judgment value Q, and τ is the calculation weight of β1 on the judgment value Q. When Q < P, it is judged that the H1 region is consistent with the Ku1 region image.
When Q ≥ P, it is judged that the image of the H1 region is not consistent with the Ku1 region image.
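Under the reconstruction of the formula above, a single-region check might look like the following sketch; θ1 is taken as the angle between the two segments, α1 and β1 as the vertical and horizontal offsets of their midpoints, and the weights γ, ε, τ and the threshold P are illustrative assumptions.

import math

def segment_features(h, k):
    """h, k: ((x, y), (x, y)) endpoint pairs of segments h1 and ku1."""
    (hx1, hy1), (hx2, hy2) = h
    (kx1, ky1), (kx2, ky2) = k
    theta1 = abs(math.atan2(hy2 - hy1, hx2 - hx1)
                 - math.atan2(ky2 - ky1, kx2 - kx1))  # angle between segments
    alpha1 = abs((hy1 + hy2) / 2 - (ky1 + ky2) / 2)   # vertical midpoint offset
    beta1 = abs((hx1 + hx2) / 2 - (kx1 + kx2) / 2)    # horizontal midpoint offset
    return theta1, alpha1, beta1

def region_consistent(h, k, gamma=1.0, eps=1.0, tau=1.0, P=2.0) -> bool:
    theta1, alpha1, beta1 = segment_features(h, k)
    Q = (gamma * theta1**2 + eps * alpha1**3 + tau * beta1**3) ** (1.0 / 3.0)
    return Q < P  # Q < P: the two region images are judged consistent

print(region_consistent(((0, 0), (4, 1)), ((0.2, 0.1), (4.1, 1.2))))  # -> True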
Specifically, the central control module 4 compares all regions in the matrix group H0 with the regions in the matrix group Ku. The k1 region is divided into n1 regions, of which m1 regions match. The k2 region is divided into n2 regions, of which m2 match. The k3 region is divided into n3 regions, of which m3 match. The k4 region is divided into n4 regions, of which m4 match.
The central control module 4 calculates the regional anastomosis rates: the anastomosis rate of the k1 region is S1 = m1/n1, that of the k2 region is S2 = m2/n2, that of the k3 region is S3 = m3/n3, and that of the k4 region is S4 = m4/n4.
The central control module 4 calculates the coincidence rate Sz between the face image k" and the face image ku:
Sz = √(S1 × z1 + S2 × z2 + S3 × z3 + S4 × z4)
where z1 is the weight parameter of S1 on Sz, z2 is the weight parameter of S2 on Sz, z3 is the weight parameter of S3 on Sz, and z4 is the weight parameter of S4 on Sz.
The central control module 4 is also provided with a threshold parameter S for the coincidence rate between the face image k" and the face image ku, and compares Sz with S as follows:
When Sz > S, the central control module 4 judges that the face image k" and the face image ku are the same user's face.
When Sz ≤ S, the central control module 4 judges that the face image k" and the face image ku are different faces.
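A minimal sketch of this weighted pass decision follows; the weights z1–z4 and the threshold S are illustrative assumptions (the patent requires only that each partition carries its own weight).

import math

def coincidence_rate(matches, regions, weights):
    """matches: (m1..m4); regions: (n1..n4); weights: (z1..z4)."""
    S = [m / n for m, n in zip(matches, regions)]  # anastomosis rates S1..S4
    return math.sqrt(sum(s * z for s, z in zip(S, weights)))

# Whole face / eyes / nose / mouth; the eyes are weighted most heavily here.
Sz = coincidence_rate(matches=(80, 18, 8, 9), regions=(100, 20, 10, 12),
                      weights=(0.2, 0.4, 0.2, 0.2))
S_threshold = 0.9
print(round(Sz, 3), Sz > S_threshold)  # Sz > S: judged the same user's face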
Specifically, when the central control module 4 judges that the face image k" and the face image ku are the same user's face, it obtains the static feature data of the face image k" according to the preset algorithm, and the static feature data are incorporated into the ku data samples as standard images.
When the central control module 4 judges that the face image k" and the face image ku are different faces, it projects the face image kv in the matrix group Kv to the plane and repeats the above operation to recognize the face image kv.
Specifically, when the central control module 4 judges that the face image k", the face image ku and the face image kv are all different faces, the central control module 4 judges that the face cannot pass.
Thus, the technical solution of the invention has been described in combination with the preferred embodiment shown in the attached drawings. However, technicians in this field will readily understand that the protection scope of the invention is not limited to these specific embodiments. Without deviating from the principle of the invention, technical personnel in the field may make equivalent changes or substitutions to the relevant technical features, and the technical solutions resulting from such changes or substitutions shall fall within the scope of protection of the invention.

Claims (9)

The claims defining the invention are as follows:
1. A facial recognition system based on a neural network, characterized by:
Shell: The shell protects the components mounted inside it.
Camera: The camera is arranged on the upper part of the shell to scan the face in the detection area.
Display: The display is arranged on the upper part of the shell, located on one side of the camera device and connected with it, to display the picture detected by the camera device in real time and feed the scanned face state back to the user. The user adjusts the position of his or her face according to this feedback.
Central control module: The central control module is arranged inside the shell and connected to the camera device, which transmits the scanned face information to the central control module.
Before using the facial recognition system, users need to input multiple photos of themselves taken from different angles into the central control module. The central control module recognizes the different photos of the same user and obtains static facial feature data through a built-in algorithm. The static feature data serve as standard image data samples, and each sample can generate the corresponding user's face image. The data samples of multiple users are combined into the data sample matrix group K0.
The camera collects face video information and delivers it to the central control module. The central control module analyzes each frame of the video to obtain multiple frames of face images to be recognized and selects the most suitable face image k.
The camera has an infrared detection function that can detect the distance L between the person and the camera device and transmit the result to the central control module. The central control module compensates the face image k to k' according to the distance L.
The central control module projects the face image k' onto the plane and divides the face into n regions in pixels. The central control module detects the brightness of all regions within the face image k', adjusts the brightness and generates a new face image k".
The central control module compares the face image k" with the user face images stored in the matrix group K0 and selects the two face images with the highest similarity for accurate comparison. The central control module divides the face image k" into an eye image, a nose image, a mouth image and the whole face image. The whole face image is divided into n1 regions, the eye image into n2 regions, the nose image into n3 regions and the mouth image into n4 regions. The central control module compares each single-region image with the corresponding parts of the two selected face images, judges whether the single region is consistent, and calculates the anastomosis rate of each partition image. Each partition image carries a different weight. The central control module combines the anastomosis rates and weights of the partition images to calculate whether the face image k" passes.
When face recognition passes, the recognized image is incorporated into the data samples.
2. The facial recognition system based on a neural network according to claim 1, characterized in that:
When face recognition is carried out, the camera device collects the video information of the face and transmits it to the central control module. The central control module analyzes each frame of the video, obtains multiple frames of face images to be recognized and selects the most suitable face image k. The face image is M1 in length and N1 in width.
The camera has an infrared detection function that can detect the distance L between the person and the camera and transfers the result to the central control module. The central control module compensates the face image k to k' according to the distance L. The length of the compensated image is M and the width is N, where M = M1 × L × m' and N = N1 × L × n'; m' is the length compensation parameter of the face image and n' is the width compensation parameter of the face image.
The central control module projects the face image k' to the plane, establishes a rectangular coordinate system with the tip of the nose as the origin, takes the transverse direction of the face as the X axis, takes the longitudinal direction of the face as the Y axis, and divides the face into n regions in pixels.
3. The facial recognition system based on a neural network according to claim 2, characterized in that:
The central control module is equipped with a brightness matrix C0 and a brightness adjustment parameter matrix D0. For the brightness matrix C0, C0 = (C1, C2, C3, C4), where C1 is the first preset brightness parameter, C2 is the second preset brightness parameter, C3 is the third preset brightness parameter, C4 is the fourth preset brightness parameter, and the brightness parameters increase in order. For the brightness adjustment parameter matrix D0, D0 = (D1, D2, D3, D4), where D1 is the first preset brightness adjustment parameter, D2 is the second preset brightness adjustment parameter, D3 is the third preset brightness adjustment parameter, and D4 is the fourth preset brightness adjustment parameter.
The central control module detects the brightness Ci of region i of the face image k' and compares Ci with the parameters of the C0 matrix:
When Ci ≤ C1, the central control module judges that the brightness of region i is insufficient and selects D1 from the D0 matrix as the brightness adjustment parameter.
When C1 < Ci ≤ C2, the central control module judges that the brightness of region i is insufficient and selects D2 from the D0 matrix as the brightness adjustment parameter.
When C2 < Ci ≤ C3, the central control module judges that the brightness of region i is in the standard state.
When C3 < Ci ≤ C4, the central control module judges that the brightness of region i is too high and selects D3 from the D0 matrix as the brightness adjustment parameter.
When Ci > C4, the central control module judges that the brightness of region i is too high and selects D4 from the D0 matrix as the brightness adjustment parameter.
When the central control module judges that the brightness of region i is not in the standard state, it adjusts the brightness of region i to Ci'. When it judges that the brightness of region i is insufficient, Ci' = Ci + (C3 − Ci) × Dj, j = 1, 2. When it judges that the brightness of region i is too high, Ci' = Ci − (Ci − C2) × Dr, r = 3, 4.
When the central control module has adjusted the brightness of region i to Ci', it compares Ci' with the parameters in the C0 matrix. When C2 < Ci' ≤ C3, the central control module judges that the brightness of region i is in the standard state. When Ci' is not within C2–C3, the above operation is repeated until C2 < Ci' ≤ C3.
The central control module adjusts all regions in the face image k' and generates a new face image k". The regional brightness of the face image k" is within the range C2–C3. The central control module generates the face image matrix group H0 from k".
4. The facial recognition system based on a neural network according to claim 3, characterized in that:
The central control module compares the face image k" with the user face images stored in the matrix group K0 and selects the two face images with the highest similarity for accurate comparison. The two face images correspond to the data sample matrix group Ku and the data sample matrix group Kv.
The central control module projects the face image ku in the matrix group Ku to the plane, establishes a rectangular coordinate system with the tip of the nose as the origin, takes the transverse direction of the face as the X axis, takes the longitudinal direction of the face as the Y axis, and divides the face into n regions in pixels.
Ku = (Ku1, Ku2, Ku3, ..., Kun). For Ku1 = (au1, bu1), au1 is the first endpoint where the face contour of user u intersects the boundary of the Ku1 region, and bu1 is the second such endpoint; line segment ku1 is generated by connecting the two points.
For Ku2 = (au2, bu2), au2 is the first endpoint where the face contour of user u intersects the boundary of the Ku2 region, and bu2 is the second such endpoint; line segment ku2 is generated by connecting the two points.
For Ku3 = (au3, bu3), au3 is the first endpoint where the face contour of user u intersects the boundary of the Ku3 region, and bu3 is the second such endpoint; line segment ku3 is generated by connecting the two points.
For Kun = (aun, bun), aun is the first endpoint where the face contour of user u intersects the boundary of the Kun region, and bun is the second such endpoint; line segment kun is generated by connecting the two points.
5. The facial recognition system based on a neural network according to claim 4, characterized in that:
The central control module extracts data from the face image k" and generates the matrix k0 = (k1, k2, k3, k4), where k1 is the overall face image of the face image k", k2 is the eye image, k3 is the nose image and k4 is the mouth image.
The face image matrix group H0 = (H1, H2, H3, ..., Hn) is generated, where H1–Hn correspond to the regions of k1, Ha–Hb to the regions of k2, Hc–Hd to the regions of k3 and He–Hf to the regions of k4.
For H1 = (p1, q1), p1 is the first endpoint where the face contour of H intersects the boundary of the H1 region, and q1 is the second such endpoint; line segment h1 is generated by connecting the two points.
For H2 = (p2, q2), p2 is the first endpoint where the face contour of H intersects the boundary of the H2 region, and q2 is the second such endpoint; line segment h2 is generated by connecting the two points.
For H3 = (p3, q3), p3 is the first endpoint where the face contour of H intersects the boundary of the H3 region, and q3 is the second such endpoint; line segment h3 is generated by connecting the two points.
For Hn = (pn, qn), pn is the first endpoint where the face contour of H intersects the boundary of the Hn region, and qn is the second such endpoint; line segment hn is generated by connecting the two points.
6. The facial recognition system based on a neural network according to claim 5, characterized in that:
The central control module compares H0 with Ku and generates a comparison matrix M0 = (M1, M2, M3, ..., Mn). M1(h1, ku1) is defined as follows:
M1(h1, ku1) = (∠(h1, ku1), ⊥(h1, ku1), ∥(h1, ku1))
∠(h1, ku1) represents the angle θ1 between line segment h1 and line segment ku1 in the preset Cartesian coordinate system. ⊥(h1, ku1) represents the vertical distance α1 between line h1 and line ku1 in the preset Cartesian coordinate system. ∥(h1, ku1) represents the horizontal distance β1 between line h1 and line ku1 in the preset Cartesian coordinate system.
The central control module is provided with a contrast parameter P and a formula for calculating the judgment value Q of a single region:
Q = ∛(γθ1² + εα1³ + τβ1³)
γ is the calculation weight of θ1 on the judgment value Q, ε is the calculation weight of α1 on the judgment value Q, and τ is the calculation weight of β1 on the judgment value Q.
When Q < P, it is judged that the H1 region is consistent with the Ku1 region image.
When Q ≥ P, it is judged that the image of the H1 region is not consistent with the Ku1 region image.
7. The facial recognition system based on a neural network according to claim 6, characterized in that:
The central control module compares all regions in the matrix group H0 with the regions in the matrix group Ku. The k1 region is divided into n1 regions, of which m1 regions match. The k2 region is divided into n2 regions, of which m2 match. The k3 region is divided into n3 regions, of which m3 match. The k4 region is divided into n4 regions, of which m4 match.
The central control module calculates the regional anastomosis rates: the anastomosis rate of the k1 region is S1 = m1/n1, that of the k2 region is S2 = m2/n2, that of the k3 region is S3 = m3/n3, and that of the k4 region is S4 = m4/n4.
The central control module calculates the coincidence rate Sz between the face image k" and the face image ku:
Sz = √(S1 × z1 + S2 × z2 + S3 × z3 + S4 × z4)
where z1 is the weight parameter of S1 on Sz, z2 is the weight parameter of S2 on Sz, z3 is the weight parameter of S3 on Sz, and z4 is the weight parameter of S4 on Sz.
The central control module is also provided with a threshold parameter S for the coincidence rate between the face image k" and the face image ku, and compares Sz with S as follows:
When Sz > S, the central control module judges that the face image k" and the face image ku are the same user's face.
When Sz ≤ S, the central control module judges that the face image k" and the face image ku are different faces.
8. The facial recognition system based on a neural network according to claim 7, characterized in that:
When the central control module judges that the face image k" and the face image ku are the same user's face, the central control module obtains the static facial feature data according to the preset algorithm, and the static feature data are incorporated into the Ku data samples as standard images.
When the central control module judges that the face image k" and the face image ku are different faces, it projects the face image kv in the matrix group Kv to the plane and repeats the above operation to recognize the face image kv.
9. The facial recognition system based on a neural network according to claim 8, characterized in that:
When the central control module judges that the face image k", the face image ku and the face image kv are all different faces, the central control module judges that the face cannot pass.
AU2021101197A 2021-03-07 2021-03-07 Facial recognition system based on neural network Ceased AU2021101197A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021101197A AU2021101197A4 (en) 2021-03-07 2021-03-07 Facial recognition system based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2021101197A AU2021101197A4 (en) 2021-03-07 2021-03-07 Facial recognition system based on neural network

Publications (1)

Publication Number Publication Date
AU2021101197A4 true AU2021101197A4 (en) 2021-05-06

Family

ID=75714345

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021101197A Ceased AU2021101197A4 (en) 2021-03-07 2021-03-07 Facial recognition system based on neural network

Country Status (1)

Country Link
AU (1) AU2021101197A4 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807264A (en) * 2021-09-18 2021-12-17 北京市商汤科技开发有限公司 Task demonstration method and device, electronic equipment and storage medium
CN113807264B (en) * 2021-09-18 2024-03-26 北京市商汤科技开发有限公司 Task demonstration method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111797677B (en) Face recognition living body detection method based on face iris recognition and thermal imaging technology
CN108470169A (en) Face identification system and method
JP3938257B2 (en) Method and apparatus for detecting a face-like area and observer tracking display
CN108427503A (en) Human eye method for tracing and human eye follow-up mechanism
CN104992141B (en) Smart biological feature monitoring assembly and method based on double-iris, stereoscopic human face and vocal print recognition
CN108985210A (en) A kind of Eye-controlling focus method and system based on human eye geometrical characteristic
GB2343945A (en) Photographing or recognising a face
WO2004070563A2 (en) Three-dimensional ear biometrics system and method
CN110309782A (en) It is a kind of based on infrared with visible light biocular systems living body faces detection methods
Reese et al. A comparison of face detection algorithms in visible and thermal spectrums
AU2021101197A4 (en) Facial recognition system based on neural network
CN110363768A (en) A kind of early carcinoma lesion horizon prediction auxiliary system based on deep learning
CN106599657A (en) Dynamic detection and feedback method used for bio-feature identification of mobile terminal
CN114894337B (en) Temperature measurement method and device for outdoor face recognition
JPWO2018078857A1 (en) Gaze estimation apparatus, gaze estimation method, and program recording medium
CN208351494U (en) Face identification system
WO2013151205A1 (en) Method and apparatus for acquiring image of face for facial recognition
CN106725341A (en) A kind of enhanced lingual diagnosis system
Hossain et al. Facial emotion verification by infrared image
JP2014064083A (en) Monitoring device and method
KR100795360B1 (en) A Method Of Face Recognizing
CN109670473A (en) Preferred method and device based on face grabgraf
CN112418060B (en) Facial recognition system based on neural network
CN110070062A (en) A kind of system and method for the recognition of face based on binocular active infrared
CN109063674A (en) A kind of living iris detection method and detection device based on hot spot on eyeball

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry