CN110852239B - Face recognition system - Google Patents


Publication number
CN110852239B
CN110852239B (application CN201911075118.XA)
Authority
CN
China
Prior art keywords
face
face image
format
image
dib
Prior art date
Legal status (an assumption, not a legal conclusion)
Active
Application number
CN201911075118.XA
Other languages
Chinese (zh)
Other versions
CN110852239A (en
Inventor
Guo Jing (郭婧)
Zhu Xianjun (朱咸军)
Liu Wei (刘尉)
Current Assignee (the listed assignees may be inaccurate)
Jinling Institute of Technology
Original Assignee
Jinling Institute of Technology
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Jinling Institute of Technology
Priority to CN201911075118.XA
Publication of CN110852239A
Application granted
Publication of CN110852239B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Abstract

The invention discloses a face recognition system comprising a face reading module, a face processing module, a face recognition module, a face modeling module, a face database and an external camera, the face reading module being used for reading a face image. The system first performs image conversion and skin color modeling, and finally performs feature extraction and recognition, constructing a basic framework of face recognition that determines, by matching and comparing an input image against a known face library, whether a given input image containing a face shows a person in that library.

Description

Face recognition system
Technical Field
The invention relates to the technical field of face recognition, in particular to a face recognition system.
Background
Automatic Face Recognition Technology (AFRT) is a biometric recognition technology. Driven by strong market demand in social security, trade, finance and the like, its directness, convenience and friendliness, qualities that other biometrics lack, have attracted the interest of more and more researchers, making it a hotspot of current research in the field of image engineering. Although the human face is a non-rigid body with complex patterns, and human beings can recognize faces and their expressions without difficulty, automatic machine recognition of faces is a very difficult subject.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a face recognition system, which comprises a face reading module, a face processing module, a face recognition module, a face modeling module, a face database and an external camera;
the face reading module comprises an image transmission band. The face reading module reads a face image through the external camera; the read face image is in BMP format and, after reading is finished, is transmitted to the image transmission band, which serves as a data storage buffer of the face reading module and is used for converting the BMP-format face image into a DIB-format face image;
the face processing module is responsible for processing the face image in the DIB format to obtain a face image in the DIB format after nonlinear conversion, and sending the processed face image to the face modeling module;
the face modeling module is used for modeling according to the face skin color in each nonlinearly converted DIB-format face image, eliminating noise from the modeled face image, and storing the resulting image into a face database;
the face recognition module is used for extracting and recognizing the features of the face images in the face database.
In the invention, the image transmission band comprises M storage blocks connected by hash pointers; the hash-pointer connection is a one-way, ordered connection. Each storage block comprises a data storage area and a check area. Only one BMP-format image is placed in the data storage area of each storage block, and the check area of each storage block contains a check number. The check number in a storage block verifies all storage blocks arranged before it in the one-way hash-pointer connection: it checks whether a BMP-format face image is stored in each of those preceding storage blocks, and whether the check numbers in their check areas are all 1. If BMP-format images are stored in the data storage areas of all preceding storage blocks and the check numbers in all their check areas are 1, a check number with the value 1 is stored in the check area of the current storage block. The initial value of the check number is 0, and the check number in the check area of the first generated storage block defaults to 1.
In the invention, the face reading module takes the time of reading the BMP-format face image as a timestamp and adds it to the storage block holding that image. The BMP-format face images are stored in storage blocks 1 to M in timestamp order, i.e. the face image with the earliest reading time is stored in the first storage block, and subsequent images are stored block by block in order of reading time.
The face reading module verifies, via the check numbers, the M-1 storage blocks preceding a new storage block, judging whether each of their check numbers is 1. Only when all check numbers are 1 does it take the BMP-format face image out of the data storage area of the first storage block in the image transmission band at a fixed frequency, convert it into a DIB-format face image, and send it to the face processing module; the face reading module then destroys the first storage block, appends a new storage block at the tail of the image transmission band, and places the face image read by the external camera into the data storage area of the new block. The face processing module processes the DIB-format face image only when the check numbers are 1. M is a positive integer greater than 1, with a value of at most 255; the fixed frequency can be freely set by the user, for example taking T BMP-format images from the first storage block every minute, where T is a natural number from 1 to 100.
In the invention, the face processing module is responsible for processing the face image in DIB format to obtain the face image in DIB format after nonlinear conversion, and sending the processed face image to the face modeling module, and the specific steps comprise:
a1, the face processing module performs light compensation on the DIB-format face image: it arranges the pixels of the image from high to low brightness, extracts the pixels ranked in the first X%, and linearly amplifies them until their average brightness reaches 255. A distributed unit is arranged in the face processing module to store, before linear amplification, the pixels of the DIB-format face image ranked in the first X%; X is a real number between 3 and 8;
a2, the face processing module performs nonlinear conversion on the DIB-format face image processed in step a1, to eliminate the dependence of color values on brightness values in the image. During nonlinear conversion, conversion regions are divided on the DIB-format face image; each conversion region is an area range on the image, the regions are equal in area and do not overlap, and each region is converted in turn until all regions have been nonlinearly converted, yielding the nonlinearly converted DIB-format face image. The conversion regions are regular squares or rectangles; the face processing module equally divides the face image into 16 to 25 conversion regions according to its area.
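The light compensation of step a1 can be sketched as follows. The single scale factor applied to the top X% of pixels is an assumption; the text says only that those pixels are linearly amplified until their average brightness reaches 255:

```python
def light_compensation(pixels: list[int], x_percent: float = 5.8) -> list[int]:
    """Light-compensation sketch (step a1): linearly amplify the brightest
    X% of pixels so that their average brightness reaches 255."""
    # rank pixel indices from high to low brightness
    order = sorted(range(len(pixels)), key=lambda i: pixels[i], reverse=True)
    k = max(1, round(len(pixels) * x_percent / 100))
    top = order[:k]                      # pixels ranked in the first X%
    mean_top = sum(pixels[i] for i in top) / k
    gain = 255.0 / mean_top if mean_top else 1.0
    out = pixels[:]
    for i in top:                        # amplify only the extracted pixels
        out[i] = min(255, round(pixels[i] * gain))
    return out
```

Clipping at 255 keeps the result a valid 8-bit brightness; the remaining pixels are left untouched, matching the extract-then-amplify description.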
The distributed unit comprises more than two distributed nodes, which can in turn form a distributed node group; the number of distributed nodes in one group is not less than M. Each distributed node stores the pixels of one amplified batch of face-image pixels ranked in the first X%. A time period T is set and a clock counts from 0; each batch of pixels is stored in a distributed node, and once that storage is finished the next batch is stored in another node. When the clock reaches T, the distributed nodes filled between time 0 and time T form one distributed node group;
in the next time period T, the pixels of each amplified batch ranked in the first X% are stored in distributed nodes that have not yet been used, and those nodes form a new distributed node group. This storage process repeats cyclically with period T, the clock being cleared before each period. In each distributed node group a master node is designated; the master node stores only the time at which the group's storage began and the time at which it ended.
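The period-driven grouping of distributed nodes, with a master node recording only start and end times, can be sketched as below; the class shape and the explicit `now` timestamps are illustrative assumptions:

```python
class DistributedUnit:
    """Sketch of the distributed unit: each stored pixel batch goes to a
    fresh node; nodes filled during one period T form a node group whose
    master record keeps only the group's start and end times."""
    def __init__(self, period_t: float):
        self.period_t = period_t
        self.groups = []           # closed groups: {"start", "end", "nodes"}
        self._nodes = []           # nodes filled in the current period
        self._start = None

    def store(self, pixel_batch, now: float) -> None:
        if self._start is None:
            self._start = now
        if now - self._start >= self.period_t:
            self._close_group(now)
        self._nodes.append(pixel_batch)   # one unused node per batch

    def _close_group(self, now: float) -> None:
        # master node keeps only when storage started and when it ended
        self.groups.append({"start": self._start, "end": now,
                            "nodes": self._nodes})
        self._nodes = []
        self._start = now          # clock cleared; next period begins
```

Each call to `store` plays the role of filling one previously unused node, so group membership follows purely from the clock.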
In the invention, the face modeling module models according to the face skin color in each face image in the DIB format after nonlinear conversion, eliminates noise of the modeled face image, and then stores the face image into a face database, and the specific steps comprise:
the face modeling module models according to the face skin color in each nonlinearly converted DIB-format face image. Before modeling, each such image is encoded: the code consists of the storage start time followed by the storage end time stored in the master node of the distributed node group containing the image, followed by the image's serial number; the serial number is a positive integer greater than 0, increasing from 1 in order, and the code expressed as a binary number is the formal code. The modeling process of the face modeling module converts the face image from an RGB space with high color-component correlation to a YCbCr color space with low color-component correlation, then performs piecewise linear color transformation on the image, and finally projects it onto a two-dimensional subspace to obtain a skin-color clustering model, thereby obtaining the modeled face image. Finally, the formal codes are placed into P data sections, where P is an integer greater than 1; each data section has a primary key, and the primary keys correspond one-to-one with the formal codes;
the modeling process of the face modeling module is to convert from an RGB space with high color component correlation to a YCbCr color space with low color component correlation, and the adopted formula is as follows:
[Formula rendered as an image in the original: RGB-to-YCbCr color space conversion]
the piecewise linear color transformation is carried out on the face image by adopting the following formula:
[Formula rendered as an image in the original: piecewise linear color transformation]
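Both formulas appear only as images in the original. As a hedged stand-in, here is the widely used full-range ITU-R BT.601 RGB-to-YCbCr conversion, which matches the stated purpose of moving to a color space with low component correlation; the patent's exact coefficients and its piecewise linear transform are not recoverable and are not reproduced:

```python
def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Full-range RGB -> YCbCr conversion with ITU-R BT.601 coefficients
    (an assumed substitute for the patent's un-rendered formula)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b          # luma
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128    # blue-difference chroma
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128    # red-difference chroma
    return y, cb, cr
```

Skin pixels cluster tightly in the (Cb, Cr) plane regardless of brightness, which is why the document projects onto that two-dimensional subspace for skin-color clustering.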
the human face modeling module eliminates noise of the modeled human face image in a mode of an expansion elimination method and a corrosion elimination method, the expansion elimination method and the corrosion elimination method are carried out simultaneously, and an N-level division band is established in the modeled human face image and comprises a first-level division band, a second-level division band, … and an Nth-level division band; the first-level division band is directly divided on the modeled face image into N rectangular areas with equal areas, and an expansion elimination method is carried out in the rectangular areas, namely if only one white pixel exists in all pixels in the rectangular areas, the white pixel is changed into a black pixel, and meanwhile, a corrosion elimination method is carried out, namely if only one black pixel exists in all pixels in the rectangular areas, the black pixel is changed into a white pixel; the second-level division belt is divided on the basis of the first-level division belt, the rectangular areas divided by the first-level division belt and having the same area are further divided into N rectangular areas having the same area, and in the rectangular areas, an expansion elimination method is carried out, namely if only one white pixel exists in all pixels in the rectangular areas, the white pixel is changed into a black pixel, and meanwhile, a corrosion elimination method is carried out, namely if only one black pixel exists in all pixels in the rectangular areas, the black pixel is changed into a white pixel; and analogizing, dividing the N-level division belt into N rectangular areas with equal areas in the rectangular areas divided by the N-1-level division belt, performing an expansion elimination method and a corrosion elimination method to finally obtain a processed face image, and storing the processed face image into a face database; n is a positive integer larger than 1 and is specified by a user, and the 
larger the numerical value of N is, the higher the dividing precision is.
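The N-level division-band elimination can be sketched as below. The patent does not fix how a region splits into N equal rectangles, so equal-height strips are assumed here, and both elimination rules are evaluated before any pixel is changed, matching the "carried out simultaneously" wording:

```python
def eliminate_noise(img: list[list[int]], n: int, levels: int) -> list[list[int]]:
    """Division-band noise removal sketch on a binary image
    (1 = white, 0 = black); geometry of the split is an assumption."""
    regions = [(0, len(img))]                  # (row_start, row_end) bands
    for _ in range(levels):                    # refine the bands level by level
        regions = [(r0 + i * (r1 - r0) // n, r0 + (i + 1) * (r1 - r0) // n)
                   for (r0, r1) in regions for i in range(n)]
    out = [row[:] for row in img]              # decide on img, write to out
    for r0, r1 in regions:
        cells = [(r, c) for r in range(r0, r1) for c in range(len(img[0]))]
        whites = [p for p in cells if img[p[0]][p[1]] == 1]
        blacks = [p for p in cells if img[p[0]][p[1]] == 0]
        if len(whites) == 1:                   # dilation elimination
            r, c = whites[0]
            out[r][c] = 0
        if len(blacks) == 1:                   # erosion elimination
            r, c = blacks[0]
            out[r][c] = 1
    return out
```

Reading from `img` and writing to `out` keeps the two rules independent of each other within a pass, so a flipped pixel cannot trigger the opposite rule in the same region.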
In the invention, the face recognition module is used for extracting and recognizing the features of the face image in the face database, and the specific steps comprise:
b1, carrying out geometric normalization processing and graying processing on the face image in the face database;
b2, a data memory is placed in the face recognition module; the data memory can call a conversion function beta, which is used to run the K-L algorithm. The face images in a group from the face database are represented as an n x m vector A composed of m column vectors, each column vector representing non-negative gray values of a face image. Vector A is given as input to the data memory, which is divided into m data segments, each with a logical address; the column vectors are distributed into the data segments. The logical addresses are continuous, i.e. adjacent data segments have continuous logical addresses, the column vectors stored in adjacent data segments are adjacent in vector A, and the column vectors stored in all data segments together form the n x m vector A. Here n is the width of the face image and m is the length, both in pixels;
when vector A is given as input, an input signal is sent; on receiving the input signal the conversion function beta is activated, and the K-L algorithm it contains begins to run. After the run, beta outputs a success signal; on receiving it, the data memory clears the column vectors stored in all data segments. The K-L algorithm outputs a generating matrix of the face image, on which singular value decomposition is performed to obtain the feature vectors of the face image. A feature vector has either a forward relation or a reverse relation to the principal component analysis method: the forward relation means the feature vector is successfully projected by principal component analysis, and the reverse relation means the projection fails, in which case the error is labeled. When a feature vector has the forward relation, its projection is mapped into a group of coordinate coefficients marked k, with k initially equal to 1 and k = k + 1 after each marking; the coordinate coefficients are then stored into the face database;
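The K-L step of b2 can be illustrated with a small pure-Python sketch: the generating matrix is taken as G = AᵀA, its dominant eigenvector is found by power iteration as a stand-in for the singular value decomposition the text mentions, and each image column is projected to one coordinate coefficient. All names and the single-component projection are illustrative assumptions:

```python
def kl_coordinates(A: list[list[float]], iters: int = 200) -> list[float]:
    """K-L sketch: dominant-component coordinate coefficient for each of
    the m image columns of the n x m matrix A."""
    n, m = len(A), len(A[0])
    # generating matrix G = A^T A (m x m)
    G = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(m)]
         for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):                     # power iteration on G
        w = [sum(G[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    # dominant left singular direction u = A v / |A v|
    u_raw = [sum(A[r][j] * v[j] for j in range(m)) for r in range(n)]
    nu = sum(x * x for x in u_raw) ** 0.5 or 1.0
    u = [x / nu for x in u_raw]
    # one coordinate coefficient per image column: projection onto u
    return [sum(u[r] * A[r][j] for r in range(n)) for j in range(m)]
```

A real system would keep several components and subtract the mean face first; this sketch shows only the eigendecomposition-then-project shape of the method.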
step b3, when the face reading module reads a new face image, the image is first converted into a DIB-format face image, which is processed to obtain a nonlinearly converted DIB-format face image; the face skin color in the image is then modeled, noise is eliminated from the modeled image, and finally feature extraction by the face recognition module yields a coordinate coefficient a for the new face image. This coefficient is compared with the coordinate coefficients in the face database: the more similar the coefficients, the higher the similarity of the two images, and the face image whose coordinate coefficient in the database is closest to a is the image most similar to the new face image.
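The matching rule of step b3 can be sketched as a nearest-neighbor search over stored coordinate coefficients; Euclidean distance is an assumption, since the text says only that more similar coefficients mean more similar images:

```python
def match_face(new_coeffs: list[float],
               database: dict[str, list[float]]) -> str:
    """Return the identity whose stored coordinate coefficients lie
    closest to the new image's coefficients (step b3 sketch; names
    and the distance metric are illustrative)."""
    def dist(a: list[float], b: list[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(database, key=lambda name: dist(database[name], new_coeffs))
```

A deployed system would also apply a distance threshold so that unknown faces are rejected rather than forced onto the nearest library entry.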
The beneficial results of the invention are as follows: the face recognition system adopts several main steps, first performing image conversion and skin color modeling and finally performing feature extraction and recognition, constructing a basic framework of face recognition and determining whether a given input image containing a face shows a person in the library by matching and comparing the input image against the known face library.
Drawings
The above and other advantages of the present invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1a is a distribution diagram of pixels in Ycr subspace.
FIG. 1b is a distribution diagram of pixels in Ycb subspace.
Fig. 1c shows the pixel distribution in the Ycc space after conversion.
Fig. 1d is a projection of a pixel onto CrCb subspace.
Fig. 2 is an image after modeling of human face skin.
Fig. 3 is an image of the dilation process.
Fig. 4 is the center point of the eye.
Fig. 5 is a center point of the mouth.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
In the field of computer vision and pattern recognition, Face Recognition Technology (FRT) is one of the most challenging problems, and in recent years, with the rapid development of related technology and growing practical needs, it has gradually attracted the attention of more and more researchers. Face recognition has practical and potential applications in many fields, with broad prospects in aspects such as certificate inspection, banking systems, military security and security inspection. Face recognition technology is used in the judicial field as an auxiliary means of identity verification and criminal identification, and in the commercial field in, for example, bank credit card identification and security identification systems. Precisely because face recognition has such broad application prospects, it is becoming a research hotspot in the fields of pattern recognition and artificial intelligence.
Although human beings can recognize faces and their expressions without difficulty, automatic machine recognition of faces remains a highly difficult subject. Compared with other biometric identification systems based on fingerprints, retinas, irises, genes, voice and the like, a face recognition system is more friendly and direct, and users have no psychological barrier to it. Moreover, information that is difficult for other recognition systems to obtain can be gathered through analysis of facial expression and posture. Beyond these applications, research on face recognition also has great theoretical value. The human face is a naturally occurring complex visual pattern, and studying its detection and recognition as an example of a two-dimensional image or a three-dimensional non-rigid object will certainly promote the development of related theories such as image processing, pattern recognition and computer vision. In addition, face recognition is a typical research subject in visual cognitive psychology; research on automatic face recognition should, on one hand, draw on the findings about the human visual system (HVS) that are most helpful to face recognition, and on the other hand will positively influence and inspire the discussion of related subjects in visual psychology.
The invention provides a face recognition system comprising a face reading module, a face processing module, a face recognition module, a face modeling module, a face database and an external camera, the face reading module being used for reading a face image. The cameras generally adopted for face recognition are produced by companies such as Hikvision and Zhejiang Dahua; cameras of models such as the Dahua DH-IPC-HF8229F and the Hikvision DS-2CD8426FWD/F-I can be used in the system;
the human face reading module reads a human face image through the external camera, the read human face image is in a BMP format, and after reading is finished, the human face image is transmitted to an image transmission belt, and the image transmission belt is used as a data storage buffer of the human face reading module and is used for converting the human face image in the BMP format into a human face image in a DIB format;
the face processing module processes the face image in the DIB format to obtain a face image in the DIB format after nonlinear conversion, and sends the processed face image to the face modeling module;
the face modeling module is used for modeling according to the face skin color in each nonlinearly converted DIB-format face image, eliminating noise from the modeled face image, and storing the resulting image into a face database;
the face recognition module is used for extracting and recognizing the features of the face images in the face database.
The image transmission band comprises M storage blocks, the storage blocks are connected through Hash pointers, and the Hash pointers are connected in a one-way mode with a sequence.
The storage blocks comprise a data storage area and a check area, and only one BMP-format image is placed in the data storage area of each block. The check area of each block contains a check number; the check number in the Mth block verifies all M-1 blocks arranged before it in the one-way hash-pointer connection, that is, it checks whether a BMP-format face image is stored in each of those M-1 blocks and whether the check numbers in their check areas are all 1. If the BMP-format images are stored and the check numbers are 1, a check number with the value 1 is stored in the check area of the Mth block.
The face reading module takes the time for reading the face image in the BMP format as a time stamp to be added to the storage blocks, and stores the face image in the BMP format in 1-M storage blocks according to the sequence of the time stamp, namely, the face image with the earliest reading time is stored in the first storage block.
In this embodiment, the face reading module reads a face image through the external camera; the read image is in BMP format and is transmitted to the image transmission band after reading. Assume 12 storage blocks precede it; 12 lies between 1 and 255 and meets the numerical requirement. Before placing the image in a storage block, the module checks whether a check number with the value 1 is stored in the check areas of the preceding 11 blocks; if the condition is met, the BMP-format image is placed in the block. The face reading module takes the BMP-format face image out of the first storage block of the image transmission band, converts it into a DIB-format face image and transmits it to the face processing module. The face processing module performs light compensation on the image and then uses distributed nodes to store pixels in matrix form: with X taken as 5.8, the linearly amplified pixels ranked in the first 5.8% are placed into distributed nodes in turn, the number of nodes being determined by the number of pixels. A distributed node group is formed within time T, and the master node stores the times at which building of the group started and ended. The face processing module then performs nonlinear conversion, which eliminates the dependency of chroma on brightness; for example, fig. 1a and fig. 1b are pixel distribution diagrams of the YCr and YCb subspaces of the image, fig. 1c shows the pixel distribution in the YCbCr space after conversion, and fig. 1d is the projection of the pixels onto the CrCb subspace. The face modeling module matches the chroma components using a chroma-information ellipse formula to perform face skin modeling on the nonlinearly converted image; the image after face skin modeling is shown in fig. 2, whose left side is the result of a set of formulas in the intermediate process and whose right side is the final result. The face modeling module then eliminates noise from the modeled image by the dilation and erosion elimination methods, in order to remove the influence on image quality of blurring of the image itself and of rough obstacles on the target image; the final processed image, obtained by conversion of black and white pixels, is shown in fig. 3. Note that the processed image differs from the original image in all processes; the dilation process is shown in the left image of fig. 3 and the erosion removal in the right image of fig. 3.
The face recognition module performs feature extraction and recognition on a face image in the face database by selecting the eyes and mouth of the face. The center point of the eyes is obtained by double matching of the brightness and chromaticity of the eyes and removal of non-eye regions, as shown in fig. 4. Locating the mouth is similar to locating the eyes and is also based on the Cr and Cb components of the image: the mouth region is white, and since several white regions exist besides the mouth region, discrete points are removed to obtain the center point of the mouth, as shown in fig. 5.
The face reading module takes the BMP-format face image out of the data storage area of the first storage block in the image transmission band at a fixed frequency and converts it into a DIB-format face image. BMP-format face image data is not compressed; digital image processing requires the actual digital image, the uncompressed BMP-format face image corresponds to it, and so the BMP-format face image read into the memory of the face reading module can be read directly as a DIB-format face image. BMP (Bitmap) is a standard image file format in the Windows operating system and is divided into two categories, the device-dependent bitmap (DDB) and the device-independent bitmap (DIB); it can therefore be read directly as a DIB-format face image. The image is sent to the face processing module, after which the face reading module destroys the first storage block, appends a new storage block at the tail of the image transmission band, places the face image read by the external camera into the data storage area of the new block, and simultaneously begins verifying the M-1 blocks preceding the new block via the check numbers. M is a positive integer greater than 1, with a value of at most 255; the fixed frequency can be freely set by the user, generally as taking T BMP-format images from the first storage block every minute, where T is a natural number from 1 to 100.
The face processing module performs related processing on the face image in the DIB format, and specifically includes:
a1, the face processing module performs light compensation on the DIB-format face image: it arranges the pixels of the image from high to low brightness, extracts the pixels ranked in the first X%, and linearly amplifies them until their average brightness reaches 255; X is a real number between 3 and 8;
the idea of light compensation follows the basic approach proposed by Anil K. Jain et al.: arrange the brightness of all pixels in the whole image from high to low, extract the 5% brightest pixels in the picture, and linearly amplify them so that their average brightness reaches 255, in order to counteract any color deviation present in the whole image. In the present invention, the proportion X% of brightest pixels extracted and linearly amplified is between 3 and 8.
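The light-compensation step can be sketched as follows; the function and parameter names are our own, and this assumes a single-channel grayscale brightness image:

```python
import numpy as np

def light_compensation(gray, x_percent=5.0):
    """Light compensation as described: take the brightest x% of pixels
    and scale the whole image linearly so their mean brightness becomes
    255. `gray` is a 2-D uint8 array; x_percent lies between 3 and 8."""
    flat = np.sort(gray.ravel())[::-1]          # brightness, high to low
    k = max(1, int(len(flat) * x_percent / 100))
    top_mean = flat[:k].mean()                  # mean of the top x% pixels
    if top_mean == 0:
        return gray.copy()
    gain = 255.0 / top_mean                     # linear amplification factor
    return np.clip(np.rint(gray * gain), 0, 255).astype(np.uint8)
```

Applying a single gain to the whole image preserves relative brightness while pulling the highlights up to full scale.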
Before each linear amplification, the pixels of the DIB-format face image ranked in the top X% are stored using a distributed unit;
the distributed unit comprises two or more distributed nodes, which are organized into distributed node groups, where the number of distributed nodes in one group is not less than M. Each distributed node stores the top-X% pixels of the DIB-format face image for one amplification. A time period T is set, ranging from 3 days to 18 days, and a clock counts time from a starting point of 0. The top-X% pixels of each amplification are stored into a distributed node; when that storage finishes, the next amplification's pixels are stored into another distributed node. When the clock reaches T, the distributed nodes written between time 0 and time T form one distributed node group. Here M is a positive integer greater than 1 and at most 255;
in the next time period T, the top-X% pixels of each amplification are stored into distributed nodes that have not yet been used, and those nodes form a new distributed node group. This storage process repeats cyclically with period T, the clock being cleared before each cycle. In addition, in each distributed node group the user manually designates a master node, and the master node stores only the storage start time and storage end time of that distributed node group;
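The period-T rotation of node groups described above can be sketched as follows; the class and field names are illustrative, and time is modeled as a simple number rather than a wall clock:

```python
class DistributedUnit:
    """Sketch of the node-group rotation: each write goes to a fresh
    node; when a period T elapses, the nodes used so far become one
    group whose master records only the group's start and end times."""
    def __init__(self, period_t):
        self.period_t = period_t
        self.clock = 0            # start of the running period
        self.current_nodes = []   # nodes written in the running period
        self.groups = []          # finished distributed node groups

    def store(self, pixels, now):
        self.current_nodes.append({"pixels": pixels, "time": now})
        if now - self.clock >= self.period_t:
            start = self.current_nodes[0]["time"]
            end = self.current_nodes[-1]["time"]
            # the master node keeps only start/end times of the group
            self.groups.append({"master": (start, end),
                                "nodes": self.current_nodes})
            self.current_nodes = []
            self.clock = now      # clock cleared for the next period
```

Each finished group is sealed with its master record, so later encoding steps only need the (start, end) pair.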
step a2, the face processing module performs nonlinear conversion on the DIB-format face image processed in step a1, to eliminate the dependence of color values on brightness values in the DIB-format image. During nonlinear conversion, conversion regions are divided on the DIB-format face image; each conversion region is an area range marked out on the image such that all conversion regions have equal area and do not overlap. Nonlinear conversion is applied to each conversion region in turn until all regions have been converted, yielding the nonlinearly converted DIB-format face image. The nonlinear conversion process of the invention adopts the technique proposed in section 3.1.4 of the master's thesis of Wang Fang at Ocean University of China, "Application of face recognition technology based on facial feature localization in anti-theft doors";
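The region-by-region application can be sketched as below. The patent defers the nonlinear mapping itself to an external thesis, so the `transform` argument is left abstract and the gamma curve shown is only a stand-in example; all names are our own:

```python
import numpy as np

def convert_by_regions(img, rows, cols, transform):
    """Split `img` into rows x cols equal, non-overlapping conversion
    regions and apply `transform` (any per-region nonlinear mapping) to
    each region. Assumes the image dimensions divide evenly by
    rows/cols for equal-area regions."""
    out = img.astype(np.float64).copy()
    h, w = img.shape[:2]
    rh, rw = h // rows, w // cols
    for r in range(rows):
        for c in range(cols):
            region = out[r*rh:(r+1)*rh, c*rw:(c+1)*rw]
            out[r*rh:(r+1)*rh, c*rw:(c+1)*rw] = transform(region)
    return np.clip(out, 0, 255).astype(np.uint8)

# stand-in nonlinear map: per-region gamma correction
gamma = lambda region: 255.0 * (region / 255.0) ** 0.8
```

Because the regions tile the image exactly, every pixel is converted exactly once.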
the face modeling module models according to the face skin color in each nonlinearly converted DIB-format face image. Before modeling, each nonlinearly converted DIB-format face image is encoded. The code consists of the storage start time stored in the master node of the distributed node group holding that image, followed by the storage end time, followed by the number of the nonlinearly converted DIB-format face image; this number is a positive integer greater than 0, increasing sequentially from 1, and the code is expressed as a binary number to serve as the formal code. The images are numbered according to the chronological order of their nonlinear conversion, from 001 to 255; when 255 is reached the counter is cleared and numbering restarts from 001. For example, if the number is 133, the storage start time stored in the master node of the distributed node group holding the DIB-format face image is 15:09:45 on 5 March 2019, and the storage end time is 15:10:23 on 5 March 2019, then the code is 20190305150945151023133: when the start and end times share the same year, month and day, those repeated fields are omitted from the end time (likewise, if only the year, or the year and month, coincide, only those repeated fields are omitted), and the whole code is then encoded in binary.
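The encoding rule above can be checked against the patent's own example with a short sketch; the function name is ours, and the prefix-omission rule at year / year+month / full-date boundaries is our reading of the text:

```python
from datetime import datetime

def formal_code(start, end, number):
    """Build the code described above: start time, then the end time
    with any leading fields (year / year+month / full date) that match
    the start time omitted, then the 3-digit image number (001..255,
    wrapping). Returns the decimal string and its binary form."""
    s = start.strftime("%Y%m%d%H%M%S")
    e = end.strftime("%Y%m%d%H%M%S")
    # omit the shared prefix of the end time at field boundaries:
    # full date (8 digits), year+month (6), or year (4)
    for cut in (8, 6, 4):
        if s[:cut] == e[:cut]:
            e = e[cut:]
            break
    digits = f"{s}{e}{number:03d}"
    return digits, bin(int(digits))[2:]

digits, binary = formal_code(datetime(2019, 3, 5, 15, 9, 45),
                             datetime(2019, 3, 5, 15, 10, 23), 133)
# digits == "20190305150945151023133", matching the patent's example
```

The decimal form reproduces the worked example in the text exactly before the final binary encoding step.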
The modeling process of the face modeling module comprises the following steps: convert the face image from the RGB space, in which the color components are highly correlated, to the YCbCr color space, in which they are weakly correlated; then perform a piecewise linear color transformation on the face image; and finally project the face image onto a two-dimensional subspace to obtain a skin color cluster model, thereby obtaining the modeled face image. Projecting a face image onto a two-dimensional subspace to obtain a skin color clustering model is a common image processing technique; it is covered in "Skin color clustering face detection method in YCgCr color space" published by Zhang Zhen in Computer Engineering and Applications, and in "3D modeling based on 2D head images" published by Song Gang and Gao Xin et al. in the Journal of Nankai University. Finally, the formal codes are placed into P data segments, where P is an integer greater than 1; each data segment has a primary key, and the primary keys correspond one-to-one with the formal codes.
The modeling process of the face modeling module converts the face image from the RGB space, in which the color components are highly correlated, to the YCbCr color space, in which they are weakly correlated (see in particular: Shen Yu and Xu Panduan, "Face localization research based on skin color modeling and skin color segmentation", published in the Journal of Photonics; here Y denotes the luminance component, Cb the blue chrominance component, and Cr the red chrominance component), using the standard conversion:

Y = 0.299R + 0.587G + 0.114B
Cb = -0.1687R - 0.3313G + 0.5B + 128
Cr = 0.5R - 0.4187G - 0.0813B + 128
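The RGB-to-YCbCr step can be sketched directly. The patent names the conversion but gives its equation only as an image, so this uses the common full-range BT.601 formula; the function name is ours:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Standard (BT.601 full-range) RGB -> YCbCr conversion, the usual
    first step of skin-color modeling. `rgb` has shape (..., 3);
    returns float64 YCbCr with chrominance offset by 128."""
    m = np.array([[ 0.299,   0.587,   0.114 ],
                  [-0.1687, -0.3313,  0.5   ],
                  [ 0.5,    -0.4187, -0.0813]])
    ycbcr = rgb.astype(np.float64) @ m.T
    ycbcr[..., 1:] += 128.0      # offset the chrominance components
    return ycbcr
```

In YCbCr, skin tones cluster tightly in the (Cb, Cr) plane regardless of brightness, which is what makes the later two-dimensional projection and clustering workable.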
The piecewise linear color transformation of the face image is carried out according to the formulas shown in equation images GDA0003722849740000112 and GDA0003722849740000113, wherein K_i and K_h respectively denote the i-th and h-th luminance components, C_i(Y) denotes the i-th piecewise linear color transform of the luminance component, the quantity in equation image GDA0003722849740000114 denotes the average of all luminance components after the piecewise linear color transform, the quantity in equation image GDA0003722849740000115 denotes the skin-color component after the i-th piecewise linear color transform of all components, and the quantity in equation image GDA0003722849740000116 denotes the skin-color component after the i-th piecewise linear color transform of the luminance component;
the face modeling module removes noise from the modeled face image by an expansion (dilation) elimination method and a corrosion (erosion) elimination method, carried out simultaneously. An N-level division band is established in the modeled face image, comprising a first-level division band, a second-level division band, …, and an N-th-level division band. The first-level division band divides the modeled face image directly into N rectangular areas of equal area. Within each rectangular area the expansion elimination method is applied: if exactly one white pixel exists among all pixels in the area, that white pixel is changed to a black pixel; simultaneously the corrosion elimination method is applied: if exactly one black pixel exists among all pixels in the area, that black pixel is changed to a white pixel. The second-level division band is built on the first: each equal-area rectangle of the first level is further divided into N rectangles of equal area, and the expansion and corrosion elimination methods are applied within them in the same way. By analogy, the N-th-level division band divides each rectangle of the (N-1)-th level into N equal-area rectangles and applies the expansion and corrosion elimination methods, finally yielding the processed face image, which is stored in the face database.
N is a positive integer greater than 1, specified by the user; the larger N is, the finer the division. In the present invention N is specified as a positive integer between 16 and 25, which keeps the division precision of the face image within a usable range while appropriately limiting the complexity of the algorithm;
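The multi-level elimination can be sketched for a binary image as follows. The patent says each rectangle is divided "into N rectangular areas of equal area"; this sketch uses an n x n grid per level as one plausible reading, and all names are our own:

```python
import numpy as np

def eliminate_noise(img, n, levels):
    """N-level division-band noise removal on a binary image
    (0 = black, 255 = white): at each level every current rectangle is
    split into an n x n grid; inside a cell, a single stray white pixel
    flips to black (expansion elimination) and a single stray black
    pixel flips to white (corrosion elimination), simultaneously."""
    out = img.copy()

    def divide(y0, y1, x0, x1, level):
        hs = np.linspace(y0, y1, n + 1, dtype=int)
        ws = np.linspace(x0, x1, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = out[hs[i]:hs[i+1], ws[j]:ws[j+1]]
                n_white = int((cell == 255).sum())
                n_black = int((cell == 0).sum())
                if n_white == 1:
                    cell[cell == 255] = 0    # expansion elimination
                if n_black == 1:
                    cell[cell == 0] = 255    # corrosion elimination
                if level < levels:
                    divide(hs[i], hs[i+1], ws[j], ws[j+1], level + 1)

    divide(0, img.shape[0], 0, img.shape[1], 1)
    return out
```

Counting white and black pixels before flipping keeps the two methods simultaneous, as the text requires, so one flip cannot trigger the other within the same cell.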
the face recognition module is used for extracting and recognizing the features of the face images in the face database, and the specific process is as follows:
b1, carry out geometric normalization processing and graying processing on the face image in the face database. In facial expression recognition, normalization of the face is a crucial step that determines the quality of all subsequent processing.
Face normalization comprises geometric normalization and gray-level normalization. Geometric normalization is divided into two steps: face correction and face cropping. Gray-level normalization mainly increases the contrast of the image and performs illumination compensation.
The main purpose of geometric normalization is to convert the expression sub-images to a uniform size, which aids the extraction of expression features. The specific steps are as follows:
(1) Feature point calibration: calibrate three feature points, the two eyes and the nose, using the MATLAB function [x, y] = ginput(3). The coordinate values of the three feature points are obtained mainly by manual calibration with the mouse.
(2) Rotate the image according to the coordinate values of the left and right eyes to ensure a consistent face orientation. Let the distance between the two eyes be d, with midpoint O.
(3) Determine a rectangular feature region according to the facial feature points and the geometric model: taking O as the reference, crop a rectangular region extending d to the left and right, and 0.5d and 1.5d in the vertical direction.
(4) Scale the expression sub-region images to a uniform size, which aids the extraction of expression features. Unifying the specification of the cropped images achieves geometric normalization of the images. Geometric normalization is covered in several face-recognition papers, such as the master's theses "Research on a face recognition system and face detection algorithm" and "Application of image processing in face recognition"; it is specifically described in section 2.1 of Guo Le, Yang Bo and Guo Huan, "The application of image processing technology in face recognition";
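Steps (2) and (3) reduce to simple geometry on the two eye coordinates. A minimal sketch (function name and return convention are ours; actually resampling the pixels is left to an image library):

```python
import numpy as np

def normalize_geometry(left_eye, right_eye):
    """Compute the rotation angle and crop box from the eye coordinates:
    rotate so the eye line is level, then crop d to each side of the
    midpoint O, 0.5d above and 1.5d below.
    Returns (angle_deg, (x0, y0, x1, y1))."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))  # tilt of eye line
    d = float(np.hypot(rx - lx, ry - ly))             # eye distance d
    ox, oy = (lx + rx) / 2.0, (ly + ry) / 2.0         # midpoint O
    box = (ox - d, oy - 0.5 * d, ox + d, oy + 1.5 * d)
    return angle, box
```

Anchoring the crop box to O and d makes the cropped region scale with the face, so faces at different distances from the camera normalize to comparable framings.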
b2, a data memory is placed in the face recognition module; the data memory can call a conversion function β, which is used to run the K-L algorithm. The face images in a group from the face database are represented as an n × m matrix A composed of m column vectors, each column vector holding non-negative gray values of a face image. A is fed as input into the data memory, which is divided into m data segments, each with a logical address; the column vectors are distributed among the data segments. The logical addresses are continuous, that is, adjacent data segments have continuous logical addresses, the column vectors stored in adjacent data segments are adjacent in A, and the column vectors of all data segments together form the n × m matrix A. Here n is the width of the face image and m its length, both in pixels; a group contains between 8 and 10 face images from the face database;
when the matrix A is given as input, an input signal is sent; on receiving the input signal the conversion function β is activated, and the K-L algorithm it contains begins to run. When the run finishes, β outputs a success signal; on receiving it, the data memory clears the column vectors stored in all data segments. The K-L algorithm outputs a generating matrix of the face image, on which singular value decomposition is performed to obtain the feature vectors of the face image. A feature vector has either a forward relation or a reverse relation to the principal analysis method: the forward relation means the feature vector was successfully projected by the principal analysis method, and the reverse relation means the projection failed, in which case the error is labeled. When a feature vector has the forward relation, the projection is mapped to a group of coordinate coefficients; the coordinate coefficients are marked k, with k initially equal to 1, and after marking the coordinate coefficients are stored into the face database and k = k + 1. The K-L algorithm is commonly used in image processing and is described in documents such as "Review of the eigenface recognition method based on K-L transform (PCA)" by Cheng Longyu and Lei Xiuyu; singular value decomposition is likewise a standard technique, introduced in "Singular value decomposition face recognition algorithm" by Zhao Huilin. The principal analysis method of the present invention is also known as principal component analysis (PCA);
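The K-L transform plus SVD step is the classic eigenface decomposition; a minimal sketch (names and the component count are our own; the patent names only K-L transform and SVD):

```python
import numpy as np

def kl_features(A, num_components=4):
    """K-L (PCA) step: A is the n*m matrix whose columns are flattened
    gray-value face vectors. Mean-center the columns, take the SVD, and
    project each face onto the leading eigenvectors to obtain its
    coordinate coefficients."""
    mean = A.mean(axis=1, keepdims=True)
    centered = A - mean                       # remove the average face
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = U[:, :num_components]             # leading eigenfaces
    coords = basis.T @ centered               # coordinate coefficients
    return mean, basis, coords
```

Each column of `coords` is the group of coordinate coefficients stored into the face database for one face.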
step b3, when the face reading module reads a new face image, it is first converted into a DIB-format face image, then processed to obtain the nonlinearly converted DIB-format face image; the face skin color in the image is then modeled, noise is removed from the modeled image, and finally feature extraction by the face recognition module yields the coordinate coefficient a of the new face image. The coordinate coefficient a is compared with the coordinate coefficients in the face database: the closer two coordinate coefficients are, the higher the similarity of the two images, and the face image whose coordinate coefficient in the face database is closest to a is the image most similar to the new face image. The pixels of the face image in the present invention may refer to pixel values.
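The final comparison is a nearest-neighbor lookup over stored coordinate coefficients. A sketch, using Euclidean distance as one plausible reading of "closest" (the patent does not name a metric):

```python
import numpy as np

def best_match(coord_a, database):
    """Return the key of the stored face whose coordinate coefficients
    are closest to the new image's coefficients `coord_a`.
    `database` maps image keys to coefficient vectors."""
    keys = list(database)
    dists = [np.linalg.norm(coord_a - database[k]) for k in keys]
    return keys[int(np.argmin(dists))]
```

Because all faces live in the same eigenspace, a small coefficient distance corresponds to high visual similarity.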
Most traditional anti-theft systems rely on physical keys to protect property: ordinary door locks, automobiles, safes and so on. In networked life, passwords have likewise become an important means of protecting privacy and property: e-mail passwords, device start-up passwords, bank accounts, online payment passwords, forum login passwords, and more. Both are traditional forms of security, and they share a common weakness: keys and passwords are easily lost, forgotten, or even cracked, and this kind of security grows ever more fragile as society develops. Yet personal identity confirmation and authority confirmation are needed constantly in daily life. With the development of modern technology, biometric identification has advanced rapidly and offers a better solution to this problem. Its most mature branch is fingerprint identification. A fingerprint carries abundant usable information: a person's fingerprint features generally do not change over a lifetime, and different people have different fingerprint features, so fingerprints serve well as password information. As cryptographic information, fingerprints have four important properties: universality, uniqueness, lifetime invariance and inseparability.
The present invention provides a face recognition system; there are many methods and ways to implement this technical solution, and the above description is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and embellishments without departing from the principle of the invention, and these should also be regarded as falling within the protection scope of the invention. All components not specified in this embodiment can be realized by the prior art.

Claims (1)

1. A face recognition system is characterized by comprising a face reading module, a face processing module, a face recognition module, a face modeling module, a face database and an external camera;
the human face reading module comprises an image transmission belt, the human face reading module reads a human face image through the external camera, the read human face image is in a BMP format, the human face image is transmitted to the image transmission belt after the human face image is read, and the image transmission belt is used as a data storage buffer of the human face reading module and is used for converting the human face image in the BMP format into a human face image in a DIB format;
the face processing module is responsible for processing the face image in the DIB format to obtain a face image in the DIB format after nonlinear conversion, and sending the processed face image to the face modeling module;
the human face modeling module is used for modeling according to the color of the human face skin in each human face image in the DIB format after nonlinear conversion, eliminating noise of the modeled human face image and storing the noise into a human face database;
the face recognition module is used for extracting and recognizing the features of the face images in the face database;
the image transmission band comprises M storage blocks, the storage blocks are connected by Hash pointers, and the Hash pointers are connected in a one-way mode with a sequence; the storage blocks comprise a data storage area and a check area, only one BMP format image is placed in the data storage area of each storage block, the check area of each storage block comprises a check number, the check number in one storage block is used for checking all storage blocks which are arranged in front of the storage block and are connected in a unidirectional mode by using a hash pointer, namely the check number is used for checking whether face images in the BMP format are stored in all storage blocks which are arranged in front of the storage block and are connected in a unidirectional mode by using the hash pointer or not, checking whether the check numbers in the check areas in all storage blocks which are arranged in front of the storage block in a unidirectional mode are all 1 or not, if all the BMP format images are stored in the data storage areas in all storage blocks which are arranged in front of the storage block in a unidirectional mode and the check numbers in all the check areas in all storage blocks which are arranged in front of the storage block in a unidirectional mode are all 1 or not, storing a check number with the value of 1 in a check area of the storage block, wherein the initial value of the check number is 0; the default of the check number in the check area in the first generated storage block is 1;
the face reading module takes the time for reading the face image in the BMP format as a time stamp and adds the time stamp to the storage blocks for storing the face image in the BMP format, and the face image in the BMP format is respectively stored in 1-M storage blocks according to the sequence of the time stamp, namely the face image with the earliest reading time is stored in a first storage block and then is stored in the storage blocks one by one according to the sequence of the reading time;
the face reading module starts to check M-1 storage blocks before the new storage block through a check number, whether the check number in the M-1 storage blocks before the check is 1 or not is checked, only when the check number is 1, the face image in the BMP format in the data storage area in the first storage block in the image transmission band is taken out at a fixed frequency, converted into a face image in a DIB format and sent to the face processing module, then the face reading module destroys the first storage block, adds a new storage block at the tail part of the image transmission band, and puts the face image read by an external camera into the data storage area in the new storage block;
the face processing module is responsible for processing the face image in the DIB format to obtain a face image in the DIB format after nonlinear conversion, and sending the processed face image to the face modeling module, and the specific steps comprise:
a1, the face processing module performs light compensation on the DIB-format face image, that is, arranging pixels of the DIB-format face image from high to low, extracting pixels of the DIB-format face image arranged at the first X%, and linearly amplifying the pixels of the DIB-format face image arranged at the first X% until the average pixel of the DIB-format face image reaches 255; a distributed unit is arranged in the face processing module, and is used for storing the pixels of the face image in the DIB format which is arranged in the front X% before linear amplification;
a2, the face processing module performs nonlinear conversion on the face image in the DIB format processed in the step a1, in the process of nonlinear conversion, conversion regions are divided on the face image in the DIB format, each conversion region is an area range divided on the face image in the DIB format, so that the areas of the conversion regions are equal and are not overlapped with each other, each conversion region is subjected to nonlinear conversion until all conversion regions are subjected to nonlinear conversion, and the face image in the DIB format after the nonlinear conversion is obtained; wherein, the conversion area is a regular square or rectangle;
the distributed unit comprises more than two distributed nodes which can be formed into a distributed node group again, wherein the number of the distributed nodes in one distributed node group is not less than M, each distributed node stores pixels of DIB-format face images arranged in the first X% of the pixels in each amplified mode, a time period T is set, a clock is used for timing, the starting point of timing is 0, the pixels of the DIB-format face images arranged in the first X% of the pixels in each amplified mode are stored into the distributed nodes, the pixels are stored into another distributed node after the storage is finished, and when the time of the clock is T, the distributed nodes stored from time 0 to T are formed into one distributed node group;
in the next time period T, pixels of the DIB-format face image which is arranged in the first X% of the enlarged images in each time are stored in the distributed nodes which are not used, and the stored distributed nodes form a new distributed node group; the process of storing in the distributed nodes is repeated in a cycle of a time period T, and the clock is cleared before each cycle; in addition, in each distributed node group, a master node is assigned, and the master node only stores the time for starting storage and the time for finishing storage of each distributed node group;
the face modeling module models according to the face skin color in each face image in the DIB format after nonlinear conversion, eliminates noise of the modeled face image, and then stores the face image into a face database, and the specific steps comprise:
the human face modeling module is used for modeling according to the human face skin color in each human face image in the DIB format after nonlinear conversion, the human face image in the DIB format after nonlinear conversion is coded before modeling, the coded form is that the time for starting storage stored in a main node in a distributed node group where the human face image in the DIB format is located is followed by the time for finishing storage, on the basis, the number of the human face image in the DIB format after nonlinear conversion is followed by the number of a human face image in the DIB format after nonlinear conversion, the number is a positive integer larger than 0, the human face image in the DIB format is sequentially increased from 1, and the code is expressed into a binary number as a formal code; the modeling process of the face modeling module comprises the following steps: converting the face image from an RGB space with high color component correlation to a YCbCr color space with low color component correlation, then carrying out piecewise linear color transformation on the face image, and finally projecting the face image to a two-dimensional subspace to obtain a skin color clustering model, thereby obtaining the modeled face image; finally, putting the formal codes into P data sections, wherein P is an integer larger than 1, each data section is provided with a main key, and the main keys are in one-to-one correspondence with the formal codes;
the human face modeling module eliminates noise of the modeled human face image in an expansion elimination method and a corrosion elimination method, the expansion elimination method and the corrosion elimination method are carried out simultaneously, and an N-level division band is established in the modeled human face image and comprises a first-level division band, a second-level division band, … and an Nth-level division band; the first-level division band is directly divided on the modeled face image into N rectangular areas with equal areas, and an expansion elimination method is carried out in the rectangular areas, namely if only one white pixel exists in all pixels in the rectangular areas, the white pixel is changed into a black pixel, and meanwhile, a corrosion elimination method is carried out, namely if only one black pixel exists in all pixels in the rectangular areas, the black pixel is changed into a white pixel; the second-level division belt is divided on the basis of the first-level division belt, the rectangular areas divided by the first-level division belt and having the same area are further divided into N rectangular areas having the same area, and in the rectangular areas, an expansion elimination method is carried out, namely if only one white pixel exists in all pixels in the rectangular areas, the white pixel is changed into a black pixel, and meanwhile, a corrosion elimination method is carried out, namely if only one black pixel exists in all pixels in the rectangular areas, the black pixel is changed into a white pixel; and analogizing, dividing the N-level division belt into N rectangular areas with equal areas in the rectangular areas divided by the N-1-level division belt, performing an expansion elimination method and a corrosion elimination method to finally obtain a processed face image, and storing the processed face image into a face database;
the face recognition module is used for extracting and recognizing the features of the face images in the face database, and comprises the following specific steps:
b1, carrying out geometric normalization processing and graying processing on the face image in the face database;
b2, placing a data memory in the face recognition module, wherein the data memory can call a conversion function beta, and the conversion function beta is used for operating a K-L algorithm; representing the face images in a group of face databases into n x m vectors A, wherein the vectors A are composed of m column vectors, each column vector represents a non-negative gray value of the face image, the vectors A serve as input and are input into a data memory, the data memory is divided into m data segments, each data segment has a logic address, and the column vectors are distributed into the data segments; the logic addresses are continuous, that is, the logic addresses of the adjacent data segments are also continuous, and the column vectors stored in the adjacent data segments are continuous in the vector A, and the column vectors stored in all the data segments form an n × m vector A; wherein n is the width of the face image and the unit is pixel, m is the length of the face image and the unit is pixel;
when the vector A is used as input, an input signal is sent, when the conversion function beta receives the input signal, the conversion function beta can be activated, after the activation, a K-L algorithm contained in the conversion function beta starts to operate, after the operation, the conversion function beta outputs a success signal, the data memory receives the success signal, column vectors stored in all data segments are removed, after the K-L algorithm is operated, a generation matrix of a face image is output, singular value decomposition is carried out according to the generation matrix of the face image, a feature vector of the face image is obtained, the feature vector has a forward relation and a reverse relation on a main analysis method, the forward relation is that the feature vector is successfully projected by the main analysis method, the reverse relation is that the feature vector is projected by the main analysis method and fails, errors are labeled, when the feature vector has the forward relation on the main analysis method, the projection is mapped into a group of coordinate coefficients, marking the coordinate coefficient as k, wherein k is initially equal to 1, and after marking, storing the coordinate coefficient into a face database, wherein k = k + 1;
step b3, when the face reading module reads a new face image, firstly converting the face image into a face image in a DIB format, then processing the face image in the DIB format to obtain a face image in the DIB format after nonlinear conversion, then modeling the face skin color in the face image, eliminating noise from the modeled face image, and finally performing feature extraction through the face recognition module to obtain a coordinate coefficient a of the new face image, comparing the coordinate coefficient a of the new face image with a coordinate coefficient in a face database, wherein the more similar the coordinate coefficients are, the higher the similarity of the two images is, and the face image corresponding to the coordinate coefficient closest to the coordinate coefficient a in the face database is the most similar image to the new face image.
CN201911075118.XA 2019-11-06 2019-11-06 Face recognition system Active CN110852239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911075118.XA CN110852239B (en) 2019-11-06 2019-11-06 Face recognition system

Publications (2)

Publication Number Publication Date
CN110852239A (en) 2020-02-28
CN110852239B (en) 2022-08-30

Family

ID=69599529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911075118.XA Active CN110852239B (en) 2019-11-06 2019-11-06 Face recognition system

Country Status (1)

Country Link
CN (1) CN110852239B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429709A (en) * 2020-04-29 2020-07-17 广东佳视通高新科技有限公司 Face recognition key personnel information screening system
CN113313718B (en) * 2021-05-28 2023-02-10 华南理工大学 Acute lumbar vertebra fracture MRI image segmentation system based on deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426439A (en) * 2015-11-05 2016-03-23 Tencent Technology (Shenzhen) Co., Ltd. Metadata processing method and device
WO2016172982A1 (en) * 2015-04-30 2016-11-03 Shenzhen Yinxin Wangyin Technology Co., Ltd. Data recording method, device and system, and computer storage medium
CN109697776A (en) * 2018-12-24 2019-04-30 Lvshou Health Industry Group Co., Ltd. Intelligent control method, device, equipment and medium for an unmanned gymnasium
CN110009515A (en) * 2019-03-12 2019-07-12 China Ping An Property Insurance Co., Ltd. Face-recognition-based document verification method, device, server and medium
CN110263035A (en) * 2019-05-31 2019-09-20 Alibaba Group Holding Ltd. Blockchain-based data storage and query method and device, and electronic equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Research on facial expression recognition; Ding Ming; China Masters' Theses Full-text Database, Information Science and Technology; 2014-08-15; pp. 22-32 *
Application of face recognition technology based on facial-feature localization in anti-theft doors; Wang Fang; China Masters' Theses Full-text Database, Information Science and Technology; 2007-02-15; pp. 6-49 *
Wang Fang. Application of face recognition technology based on facial-feature localization in anti-theft doors. China Masters' Theses Full-text Database, Information Science and Technology. 2007 *
Research on face localization based on skin color modeling and skin color segmentation; Shen Changyu, Xu Panyuan; Opto-Electronic Engineering; 2007-09-30; pp. 103-107 *

Also Published As

Publication number Publication date
CN110852239A (en) 2020-02-28

Similar Documents

Publication Publication Date Title
Lee et al. Intra-class variation reduction using training expression images for sparse representation based facial expression recognition
US11157721B2 (en) Facial image recognition using pseudo-images
CN110705392A (en) Face image detection method and device and storage medium
US20230076017A1 (en) Method for training neural network by using de-identified image and server providing same
CN110852239B (en) Face recognition system
Barni et al. Iris deidentification with high visual realism for privacy protection on websites and social networks
CN104217503A (en) Self-service terminal identity identification method and corresponding house property certificate printing method
JP2005259049A (en) Face collation device
Colombo et al. Detection and restoration of occlusions for 3D face recognition
CN111542821A (en) Personal authentication method and personal authentication device
CN117314714A (en) Document image falsification detection and classification method based on double-domain and multi-scale network
Choudhary et al. Multimodal biometric-based authentication with secured templates
CN113190858B (en) Image processing method, system, medium and device based on privacy protection
Jagadeesh et al. DBC based Face Recognition using DWT
Majeed et al. A novel method to enhance color spatial feature extraction using evolutionary time-frequency decomposition for presentation-attack detection
Chen et al. Face recognition using markov stationary features and vector quantization histogram
Alirezaee et al. An efficient algorithm for face localization
CN109359616B (en) Pseudo-concatenation small-size fingerprint identification algorithm based on SIFT
EP4307214A1 (en) Image processing device, image processing method, and program
Di Martino et al. Differential 3D Facial Recognition: Adding 3D to Your State-of-the-Art 2D Method
CN116628660B (en) Personalized face biological key generation method based on deep neural network coding
Amelia Age Estimation on Human Face Image Using Support Vector Regression and Texture-Based Features
Pyataeva et al. Video based face recognition method
Zheng Near infrared face recognition using orientation-based face patterns
Thakur et al. Localisation of spliced region using pixel correlation in digital images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant