CN108764121B - Method for detecting living object, computing device and readable storage medium - Google Patents

Method for detecting living object, computing device and readable storage medium

Info

Publication number
CN108764121B
Authority
CN
China
Prior art keywords
image
gray scale
distance
average gray scale interval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810510901.3A
Other languages
Chinese (zh)
Other versions
CN108764121A (en)
Inventor
王晓鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EYESMART TECHNOLOGY Ltd
Original Assignee
Shima Ronghe Shanghai Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shima Ronghe Shanghai Information Technology Co ltd
Priority to CN201810510901.3A
Publication of CN108764121A
Application granted
Publication of CN108764121B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/28 - Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting a living object, suitable for execution in a computing device, comprising the following steps: receiving a face grayscale image of an object to be detected; cropping from the face grayscale image a detection region that at least contains the eyes; quantizing the detection region to obtain a quantized image; extracting image features from the quantized image; and determining, based on the extracted image features, whether the object to be detected is a living object. The invention also discloses a corresponding computing device and readable storage medium.

Description

Method for detecting living object, computing device and readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, a computing device, and a readable storage medium for detecting a living object.
Background
With the continuous improvement of people's security requirements and awareness, identity authentication technologies based on biometric recognition, such as fingerprint recognition, iris recognition and face recognition, have developed rapidly and are widely applied. In practical applications, however, malicious attacks such as photo spoofing, video spoofing and three-dimensional-model spoofing keep emerging, posing security risks to recognition systems. Among these, the photo attack is the most common, being low-cost and simple to mount. To defend against such attacks, it is necessary to determine whether the biometric features come from a living individual, i.e., to perform liveness detection.
Most current liveness detection techniques exploit human physiological characteristics. For example, face liveness detection can be based on cues such as head rotation or the red-eye effect, but such schemes either involve complex and costly systems or require active user cooperation, which degrades the user experience.
A better scheme for detecting a living object is therefore urgently needed.
Disclosure of Invention
To this end, the present invention provides a method, a computing device and a readable storage medium for detecting a living object in an attempt to solve or at least alleviate at least one of the problems presented above.
According to an aspect of the invention, there is provided a method for detecting a living object, adapted to be executed in a computing device, the method comprising the steps of: receiving a face grayscale image of an object to be detected; cropping from the face grayscale image a detection region that at least contains the eyes; quantizing the detection region to obtain a quantized image; extracting image features from the quantized image; and determining, based on the extracted image features, whether the object to be detected is a living object.
Optionally, in the method according to the present invention, the step of quantizing the detection region to obtain a quantized image includes: calculating the average gray value of the detection region; and, if the average gray value lies in a predetermined average gray scale interval, quantizing the pixels of the detection region whose gray values lie in the predetermined gray scale interval corresponding to that average gray scale interval, their gray values being quantized into a predetermined number of gray levels.
Optionally, in the method according to the present invention, the predetermined average gray scale interval includes one or more of a first average gray scale interval, a second average gray scale interval, a third average gray scale interval and a fourth average gray scale interval, and the predetermined gray scale interval includes one or more of a first gray scale interval corresponding to the first average gray scale interval, a second gray scale interval corresponding to the second average gray scale interval, a third gray scale interval corresponding to the third average gray scale interval, and a fourth gray scale interval corresponding to the fourth average gray scale interval.
Optionally, in the method according to the present invention, the first average gray scale interval is [65, 100), the second average gray scale interval is [100, 145), the third average gray scale interval is [145, 180), and the fourth average gray scale interval is [180, 210); the first gray scale interval corresponding to the first average gray scale interval is [100, 200), the second gray scale interval corresponding to the second average gray scale interval is [130, 230), the third gray scale interval corresponding to the third average gray scale interval is [170, 255], and the fourth gray scale interval corresponding to the fourth average gray scale interval is [200, 255].
Optionally, in the method according to the present invention, the image feature includes an image contour feature, and the step of extracting the image contour feature from the quantized image includes: extracting at least one pair of contour lines according to borders of different gray levels in the quantized image, wherein each pair of contour lines comprises a left contour line and a right contour line; for each pair of contour lines, a distance curve is calculated in which the distance between the left contour line and the right contour line varies with the row direction of the quantized image.
Optionally, in the method according to the present invention, the quantized image includes a left image and a right image, and the step of extracting at least one pair of contour lines according to a boundary of different gray levels in the quantized image includes: for the junction of every two gray levels, in the left image and the right image respectively, one or more pixels corresponding to the junction in each row of pixels are determined, and the pixel farthest away from the perpendicular bisector between the eyes is extracted to form a left contour line and a right contour line of a pair of contour lines respectively.
Optionally, in the method according to the present invention, the image feature includes an image gray scale feature, and the step of extracting the image gray scale feature from the quantized image includes: calculating a gray distribution histogram of the quantized image; and calculating a gray projection curve of the quantized image in the horizontal direction.
Optionally, in the method according to the present invention, the step of extracting image gray scale features from the quantized image further includes: if the average gray value of the detection region lies in a general average gray scale interval, calculating the average gray value of the nose or the eyes.
Optionally, in the method according to the present invention, the general average gray scale interval is [65, 130].
Optionally, in the method according to the present invention, the step of determining whether the object to be detected is a living object based on the acquired image features includes: judging, based on the image contour features, at least one of the following judgment conditions, where each distance curve takes the row direction of the quantized image as its horizontal axis and the distance between the left and right contour lines as its vertical axis: at the same abscissa, the ordinate of the distance curve of the pair of contour lines corresponding to a lower gray-level boundary is smaller than the ordinate of the distance curve of the pair corresponding to a higher gray-level boundary; for each distance curve, there is a first minimum in a first abscissa domain U1 = {x : |x - a| < δ1}, where a is the abscissa corresponding to the eyes in the quantized image and δ1 is the zeroth multiple of the interocular distance; for each distance curve, there is at least one local maximum; for each distance curve, there is a second maximum in a second abscissa domain U2 = {x : x > a}, and the difference between the abscissas corresponding to the second maximum and the first minimum is not more than the first multiple of the interocular distance; for each distance curve, the first minimum is greater than the second multiple of the interocular distance and less than the third multiple; for each distance curve, the second maximum is not greater than the fourth multiple of the interocular distance and not less than the fifth multiple; for each distance curve, the difference between the second maximum and the first minimum is greater than the sixth multiple of the interocular distance; for each distance curve, in a third abscissa domain U3 = {x : b - δ3 < x < b}, the slope of the distance curve varies monotonically within a predetermined slope range, where b is the abscissa corresponding to the second maximum and δ3 is the seventh multiple of the interocular distance; for each distance curve, in a fourth abscissa domain U4 = {x : b < x < b + δ4}, the slope of the distance curve varies monotonically within a predetermined slope range, where δ4 is the eighth multiple of the interocular distance; for each distance curve, in a fifth abscissa domain U5 = {x : |x - c| < δ5}, the slope of the distance curve lies in a second predetermined slope range, where c is the abscissa corresponding to the first minimum and δ5 is the ninth multiple of the interocular distance; and, for every two distance curves, in a sixth abscissa domain U6 = {x : c - δ6 < x < b}, the correlation of the two distance curves is greater than a predetermined correlation value, where δ6 is the tenth multiple of the interocular distance.
Optionally, in the method according to the present invention, the step of determining whether the object to be detected is a living object based on the acquired image features includes: judging, based on the image gray scale features, at least one of the following judgment conditions: the gray distribution histogram of the quantized image is continuous; and the correlation coefficient between the slope of the horizontal gray projection curve of the quantized image and the slope of the curve obtained by flipping that projection curve is greater than a predetermined coefficient.
Optionally, in the method according to the present invention, if the average gray value of the detection region lies in the general average gray scale interval, the following judgment condition is judged based on the image gray scale features: the relation between the average gray value of the nose or the eyes and the average gray value of the detection region satisfies a predetermined nose or eye gray scale relation curve.
Alternatively, in the method according to the present invention, the step of determining whether the object to be detected is a living object based on the acquired image features includes: for each judgment condition, obtaining a score of the object to be detected corresponding to the judgment condition based on whether the image contour feature or the image gray scale feature conforms to the judgment condition; obtaining a total score of the object to be detected based on the score of the object to be detected corresponding to each judgment condition; and judging whether the object to be detected is a living object or not based on the total score of the object to be detected.
Optionally, in the method according to the present invention, the face grayscale image is a near-infrared image.
According to another aspect of the present invention, there is provided a computing device comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods for detecting a living subject according to the present invention.
According to still another aspect of the present invention, there is provided a readable storage medium storing a program, the program including instructions that, when executed by a computing device, cause the computing device to perform any one of the methods for detecting a living object according to the present invention.
According to the scheme for detecting a living object of the present invention, the detection region cropped from the face grayscale image is quantized to obtain a quantized image, image features are extracted from the quantized image, and liveness detection is performed on those features. This reduces complexity and speeds up liveness detection while preserving detection accuracy, and the scheme is low-cost, economical, efficient, reliable and requires no user cooperation.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 schematically illustrates a block diagram of a computing device 100;
FIG. 2 schematically shows a flow diagram of a method 200 for detecting a living object according to an embodiment of the invention;
FIGS. 3A and 3B schematically illustrate a face grayscale image and a detection region, respectively, according to an embodiment of the present invention;
FIG. 4 schematically shows contour lines according to an embodiment of the invention;
FIG. 5 schematically illustrates the distance curves of the embodiment shown in FIG. 4;
FIG. 6 illustrates a nose gray scale relation scatter plot according to one embodiment of the present invention; and
FIGS. 7A and 7B schematically illustrate quantized face grayscale images of a living object and a non-living object, respectively.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 exemplarily illustrates a block diagram of a computing device 100. The computing device 100 may be implemented as a server, such as a file server, a database server, an application server or a web server, or as a personal computer, including desktop and notebook configurations. Moreover, the computing device 100 may also be implemented as part of a small-form-factor portable (or mobile) electronic device, such as a cellular telephone, a personal digital assistant (PDA), a personal media player device, a wireless web-browsing device, a personal headset device, an application-specific device, or a hybrid device that includes any of the above functions.
In a basic configuration 102, computing device 100 typically includes system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor. The processor 104 may include one or more levels of cache, such as a level-one cache 110 and a level-two cache 112, a processor core 114, and registers 116. The example processor core 114 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof.
Depending on the desired configuration, system memory 106 may be any type of memory, including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM or flash memory), or any combination thereof. System memory 106 may include an operating system 120, one or more programs 122, and program data 124. In some implementations, the programs 122 may be configured to be executed by the one or more processors 104 on the operating system using the program data 124.
Computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to the basic configuration 102 via the bus/interface controller 130. The example output device 142 includes a graphics processing unit 148 and an audio processing unit 150. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 158. An example communication device 146 may include a network controller 160, which may be arranged to facilitate communications with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures or program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or direct-wired connection, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
The one or more programs 122 of the computing device 100 include instructions for performing any of the methods for detecting a living subject in accordance with the present invention. Fig. 2 illustrates a flow chart of a method 200 for detecting a living object according to one embodiment of the invention.
As shown in fig. 2, the method 200 for detecting a living subject starts at step S210. In step S210, a face grayscale image of an object to be detected is received. Typically, the face grayscale image is a near-infrared image and is acquired at an image acquisition device. The face gray image generally has 256 gray levels (0-255). According to one embodiment of the invention, the computing device 100 may communicate with the image capture device via one or more communication ports 164 over the network communication link described above to obtain a grayscale image of a human face captured by the image capture device. The image capture device may generally include a near infrared light source, an optical lens, and an image sensor. Of course, according to another embodiment, the computing device 100 may also be implemented as an image capture device.
It will be understood that the images may each be represented as a matrix, the rows of the matrix corresponding to the height of the image (in pixels), the columns of the matrix corresponding to the width of the image (in pixels), the elements of the matrix corresponding to the pixels of the image, and the values of the elements of the matrix corresponding to the grey values of the pixels.
Subsequently, in step S220, a detection region is cropped from the received face grayscale image, the detection region at least containing the eyes. Specifically, existing eye-localization methods such as the Hough transform, integral projection, deformable templates, principal component analysis and symmetric transformation may be used to locate the eyes in the face grayscale image, after which the detection region is cropped according to the eye positions. According to one embodiment, the cropped detection region may extend from 1/4 of the interocular distance above the eyes to 1 interocular distance below the eyes, where the interocular distance is the horizontal distance between the pupils of the two eyes and can be computed once the eyes are located. Fig. 3A and 3B schematically illustrate a face grayscale image and a detection region, respectively, according to an embodiment of the present invention.
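A minimal sketch of this cropping step in Python, assuming the eye centers have already been located by one of the methods above; the function name and coordinate conventions are illustrative, and the horizontal extent of the region, which the text leaves open, is taken here to be the full image width:

```python
import numpy as np

def crop_detection_region(gray, left_eye, right_eye):
    """Crop the detection region from a face grayscale image (H x W array).

    left_eye and right_eye are (row, col) pupil coordinates; the region spans
    from 1/4 of the interocular distance above the eyes to 1 interocular
    distance below them (vertical bounds per the embodiment above).
    """
    eye_row = int(round((left_eye[0] + right_eye[0]) / 2))
    iod = abs(right_eye[1] - left_eye[1])   # interocular distance in pixels
    top = max(0, eye_row - iod // 4)
    bottom = min(gray.shape[0], eye_row + iod)
    return gray[top:bottom, :], iod
```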
After the detection area is cut out, the detection area is quantized to obtain a quantized image in step S230. A general quantization process may be as follows:
Select n + 1 gray values x0, x1, ..., xn in the original gray interval to be quantized as quantization interval boundaries, with x0 < x1 < ... < xn, where n is a given positive integer. This yields n quantization intervals A0, A1, A2, ..., An-1, with Ai = [xi, xi+1) for i = 0, ..., n-2 and An-1 = [xn-1, xn].

For any pixel in the original gray interval, if g ∈ Ai (i = 0, 1, ..., n-1), then q = xi, where g is the pixel's gray value before quantization and q its gray value after quantization.
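A sketch of this mapping in Python (the function name is illustrative; it follows the interval convention above, with all intervals half-open except the last):

```python
import numpy as np

def quantize(gray, boundaries):
    """Quantize gray values into len(boundaries) - 1 levels.

    boundaries = [x0, x1, ..., xn] with x0 < x1 < ... < xn; a pixel with
    g in A_i = [x_i, x_{i+1}) (the last interval being closed) is mapped
    to q = x_i. Pixels outside [x0, xn] are left unchanged.
    """
    q = gray.copy()
    for i in range(len(boundaries) - 1):
        lo, hi = boundaries[i], boundaries[i + 1]
        if i < len(boundaries) - 2:
            mask = (gray >= lo) & (gray < hi)
        else:                        # A_{n-1} = [x_{n-1}, x_n] is closed
            mask = (gray >= lo) & (gray <= hi)
        q[mask] = lo
    return q
```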
Because the gray range that carries the most image information differs with the average gray value of the image, images with different average gray values need to be quantized differently. According to one embodiment of the present invention, the average gray value of the detection region may be calculated, i.e., the sum of the gray values of all pixels of the detection region divided by the number of its pixels. If the calculated average gray value lies in a predetermined average gray scale interval, the pixels of the detection region whose gray values lie in the predetermined gray scale interval corresponding to that average gray scale interval are quantized, their gray values being quantized into a predetermined number of gray levels, usually 2 or 3.
The predetermined average gray scale interval may generally include one or more of a first average gray scale interval, a second average gray scale interval, a third average gray scale interval and a fourth average gray scale interval. Correspondingly, the predetermined gray scale interval may include one or more of a first gray scale interval corresponding to the first average gray scale interval, a second gray scale interval corresponding to the second average gray scale interval, a third gray scale interval corresponding to the third average gray scale interval, and a fourth gray scale interval corresponding to the fourth average gray scale interval.
Here, the first average gray scale interval may be [65, 100) and the first gray scale interval corresponding to it may be [100, 200); that is, when the average gray value of the detection region lies in [65, 100), the gray values of the pixels in the detection region lying in [100, 200) may be quantized to 3 gray levels, and the gray values of the pixels of the whole detection region to 5 gray levels.

The second average gray scale interval may be [100, 145) and the corresponding second gray scale interval [130, 230); that is, when the average gray value of the detection region lies in [100, 145), the gray values of the pixels lying in [130, 230) may be quantized to 3 gray levels, and the whole detection region to 5 gray levels.

The third average gray scale interval may be [145, 180) and the corresponding third gray scale interval [170, 255]; that is, when the average gray value of the detection region lies in [145, 180), the gray values of the pixels lying in [170, 255] may be quantized to 3 gray levels, and the whole detection region to 4 gray levels.

The fourth average gray scale interval may be [180, 210) and the corresponding fourth gray scale interval [200, 255]; that is, when the average gray value of the detection region lies in [180, 210), the gray values of the pixels lying in [200, 255] may be quantized to 2 gray levels, and the whole detection region to 3 gray levels.
In addition, the predetermined average gray scale interval may further include a fifth average gray scale interval and/or a sixth average gray scale interval, where the fifth average gray scale interval may be [0, 65) and the sixth average gray scale interval may be [210, 255]. If the calculated average gray value lies in the fifth or the sixth average gray scale interval, the detection region is neither quantized nor subjected to liveness detection.
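The interval lookup can be summarized as below (a sketch; the names and the encoding of the inclusive upper bounds are illustrative, and how each sub-interval is further split into its gray levels is not fixed by the embodiments above):

```python
# (average-gray interval) -> (gray sub-interval to quantize, levels within
# the sub-interval, total levels for the whole detection region)
QUANT_RULES = [
    ((65, 100),  (100, 200), 3, 5),
    ((100, 145), (130, 230), 3, 5),
    ((145, 180), (170, 256), 3, 4),   # [170, 255] inclusive
    ((180, 210), (200, 256), 2, 3),   # [200, 255] inclusive
]

def select_rule(mean_gray):
    """Return the quantization rule for the region's average gray value, or
    None when it lies in [0, 65) or [210, 255] and detection is skipped."""
    for (lo, hi), sub, sub_levels, total_levels in QUANT_RULES:
        if lo <= mean_gray < hi:
            return sub, sub_levels, total_levels
    return None
```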
After the quantized image is obtained, in step S240, image features are extracted from the quantized image, and the image features may include image contour features and/or image gray scale features.
According to one embodiment of the present invention, the image contour features may be at least one pair of contour lines in a quantization region, and the extraction process is as follows:
at least one pair of contour lines is extracted according to the boundary of different gray scales in the quantized image, and each pair of contour lines comprises a left contour line and a right contour line. It is understood that the quantized image includes a left image and a right image, and the left image and the right image may be considered to be symmetrical based on the interocular perpendicular bisector based on the symmetrical characteristic of the human face.
Obviously, both the left and right images have been quantized to multiple gray levels. In the left image, a left contour line is formed at the intersection of two different gray levels. Accordingly, in the right image, the intersection of the two gray levels forms a right contour corresponding to the left contour, and the left contour and the right contour corresponding to the left contour form a pair of contours.
Specifically, in consideration of the boundary between two gray levels, one or more pixels corresponding to the boundary exist in the same row of pixels in both the left-side image and the right-side image. According to one embodiment of the present invention, in the left image, one or more pixels corresponding to the boundary in each row of pixels thereof may be determined, and the pixel farthest from the perpendicular bisector between the eyes may be extracted to form a left contour line. Accordingly, in the right image, one or more pixels corresponding to the boundary in each line of pixels thereof are determined, and the pixel farthest from the interocular perpendicular bisector is extracted to form a right contour line corresponding to the left contour line. The left contour line thus obtained and the right contour line corresponding to the left contour line constitute a pair of contour lines.
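A sketch of one reading of this extraction (the function name and the adjacent-pixel boundary test are illustrative): for each row, the boundary pixels between the two gray levels are collected on each side of the perpendicular bisector, and on each side the pixel farthest from it is kept:

```python
import numpy as np

def extract_contour_pair(quant, mid_col, level_a, level_b):
    """Return (left, right) contours for the boundary between two quantized
    levels, as per-row column indices (-1 where the row has no boundary).
    mid_col is the column of the perpendicular bisector between the eyes.
    """
    h, _ = quant.shape
    left = np.full(h, -1)
    right = np.full(h, -1)
    for r in range(h):
        row = quant[r]
        # columns where adjacent pixels jump between the two levels
        jump = np.where(((row[:-1] == level_a) & (row[1:] == level_b)) |
                        ((row[:-1] == level_b) & (row[1:] == level_a)))[0]
        left_cols = jump[jump < mid_col]
        right_cols = jump[jump >= mid_col]
        if left_cols.size:
            left[r] = left_cols.min()    # farthest from the midline, left side
        if right_cols.size:
            right[r] = right_cols.max()  # farthest from the midline, right side
    return left, right
```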
Fig. 4 illustrates contour lines according to an embodiment of the present invention. As shown in Fig. 4, 4 pairs of contour lines are extracted, namely {l1, r1}, {l2, r2}, {l3, r3} and {l4, r4}, where l and r denote the left and the right contour line, respectively.
After at least one pair of contour lines is extracted, for each pair a distance curve is calculated describing how the distance between the left and the right contour line varies along the row direction of the quantized image. The distance curve takes the row direction of the quantized image as its horizontal axis (row numbers increasing in the positive direction) and the distance between the left and right contour lines as its vertical axis; the abscissa of the distance curve is therefore the row number of each pixel row of the quantized image.
Fig. 5 shows the distance curves of the embodiment shown in Fig. 4. As shown in Fig. 5, 4 distance curves are calculated from the 4 extracted pairs of contour lines; the vertical axis of each distance curve is the normalized distance between the left and the right contour line.
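A sketch of the distance-curve computation under the representation of the previous sketch (per-row column indices, -1 for rows without a boundary); normalizing by the interocular distance matches the normalized vertical axis of Fig. 5 but is otherwise an assumption:

```python
import numpy as np

def distance_curve(left, right, iod):
    """Row-wise distance between a pair of contours, normalized by the
    interocular distance; rows missing either contour become NaN."""
    d = (right - left).astype(float)
    d[(left < 0) | (right < 0)] = np.nan
    return d / iod
```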
According to an embodiment of the present invention, the image gray scale features may include a gray distribution histogram of the quantized image and/or a gray projection curve of the quantized image in the horizontal direction, and extracting the image gray scale features from the quantized image may include: calculating the gray distribution histogram of the quantized image, and/or calculating the gray projection curve of the quantized image in the horizontal direction. In addition, if the average gray value of the detection region lies in the general average gray scale interval, the image gray scale features may further include the average gray value of the nose or the eyes. The general average gray scale interval may be [65, 130].
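A sketch of these two gray scale features; reading the "gray projection curve in the horizontal direction" as the column-wise mean gray value is an assumption (it is what makes the left-right flip in determination condition 13 below meaningful):

```python
import numpy as np

def gray_features(quant):
    """Gray-distribution histogram over the quantized levels, plus the
    horizontal gray-projection curve, taken here as the column-wise mean."""
    levels, counts = np.unique(quant, return_counts=True)
    projection = quant.astype(float).mean(axis=0)   # one value per column
    return (levels, counts), projection
```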
After the above-described image features are extracted, in step S250, it is determined whether the object to be detected is a living object based on the acquired image features. Specifically, the computing device may store at least one determination condition in advance. According to an embodiment of the present invention, at least one of the following determination conditions may be stored:
determination condition 1: under the same abscissa, the ordinate of the distance curve of the pair of contour lines corresponding to the lower gray level boundary is smaller than the ordinate of the distance curve of the pair of contour lines corresponding to the higher gray level boundary;
determination condition 2: for each distance curve, there is a first minimum in a first abscissa domain U1 = {x : |x - a| < δ1}, where a is the abscissa corresponding to the eyes in the quantized image and δ1 is the zeroth multiple of the interocular distance; the zeroth multiple may be 1/4;
determination condition 3: for each distance curve, there is at least one local maximum for that distance curve;
determination condition 4: for each distance curve, there is a second maximum in a second abscissa domain U2 = {x : x > a}, and the difference between the abscissas corresponding to the second maximum and the first minimum is not more than the first multiple of the interocular distance; the first multiple may be 0.4;
determination condition 5: for each distance curve, the first minimum value is greater than a second multiple of the interocular distance and less than a third multiple of the interocular distance, the second multiple may be 0.3, and the third multiple may be 1;
determination condition 6: for each distance curve, the second maximum is not greater than a fourth multiple of the interocular distance and not less than a fifth multiple of the interocular distance, the fourth multiple may be 2, and the fifth multiple may be 1;
determination condition 7: for each distance curve, the difference between the second maximum value and the first minimum value is greater than a sixth multiple of the interocular distance, and the sixth multiple may be 0.6;
determination condition 8: for each distance curve, in a third abscissa domain U3 = {x : b - δ3 < x < b}, the slope of the distance curve varies monotonically within a predetermined slope range, where b is the abscissa corresponding to the second maximum and δ3 is the seventh multiple of the interocular distance; the seventh multiple may be 1/4, and the predetermined slope range may be 0° to 60°;

determination condition 9: for each distance curve, in a fourth abscissa domain U4 = {x : b < x < b + δ4}, the slope of the distance curve varies monotonically within a predetermined slope range, where δ4 is the eighth multiple of the interocular distance; the eighth multiple may be 1/4, and the predetermined slope range may be 120° to 180°;

determination condition 10: for each distance curve, in a fifth abscissa domain U5 = {x : |x - c| < δ5}, the slope of the distance curve lies in a second predetermined slope range, where c is the abscissa corresponding to the first minimum and δ5 is the ninth multiple of the interocular distance; the ninth multiple may be 1/4, and the second predetermined slope range may be 70° to 110°;

determination condition 11: for every two distance curves, in a sixth abscissa domain U6 = {x : c - δ6 < x < b}, the correlation of the two distance curves is greater than a predetermined correlation value, where δ6 is the tenth multiple of the interocular distance; the tenth multiple may be 1/4, and the predetermined correlation value may be 0.4;
determination condition 12: the gray distribution histogram of the quantized image is continuous, where continuous means that the quantized gray levels appear consecutively, with none missing;

determination condition 13: the correlation coefficient between the slope of the horizontal gray projection curve of the quantized image and the slope of the curve obtained by flipping that projection curve is greater than a predetermined coefficient; the predetermined coefficient may be 0.4;
determination condition 14: if the average gray value of the detection region lies in a general average gray scale interval, for example [65, 130], the relation between the average gray value of the nose or the eyes and the average gray value of the detection region satisfies a predetermined nose or eye gray scale relation curve. The nose or eye gray scale relation curves comprise a first and a second nose or eye gray scale relation curve, both obtained from a nose or eye gray scale relation scatter plot whose horizontal axis is the average gray value of the detection region and whose vertical axis is the average gray value of the nose or the eyes. The scatter plot can be obtained by statistics over the face grayscale images of many living objects. Fig. 6 illustrates a nose gray scale relation scatter plot according to one embodiment of the present invention.

The first nose or eye gray scale relation curve is chosen such that more than 95% of the data points in the scatter plot lie below it, and the second such that more than 95% of the data points lie above it.

If the data point formed by the average gray value of the nose or the eyes and the average gray value of the detection region lies above the first curve or below the second curve, the nose or eye gray scale relation is judged not to be satisfied; otherwise it is judged to be satisfied.
Each parameter (zeroth multiple to tenth multiple, predetermined slope range, second predetermined slope range, predetermined correlation value, predetermined coefficient, etc.) in the above determination conditions may be set based on the illumination condition and the device parameter for acquiring the face grayscale image.
Each pre-stored determination condition is judged on the basis of either the image contour features or the image gray scale features, i.e., whether the condition is met is decided from those features. For example, determination conditions 1 to 11 above may be judged from the image contour features, and determination conditions 12 to 14 from the image gray scale features.
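For instance, determination condition 13 can be checked in a few lines of numpy (a sketch: the function name is illustrative, and using np.gradient as the "slope" of the projection curve is an assumption):

```python
import numpy as np

def check_condition_13(projection, predetermined_coef=0.4):
    """Correlate the slope of the horizontal gray-projection curve with the
    slope of its left-right flipped version (symmetric faces score high)."""
    slope = np.gradient(projection.astype(float))
    flipped_slope = np.gradient(projection[::-1].astype(float))
    corr = np.corrcoef(slope, flipped_slope)[0, 1]
    return corr > predetermined_coef
```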
Then, for each determination condition stored in advance, a score of the object to be detected corresponding to the determination condition may be obtained based on whether the determination condition is met.
According to one embodiment of the present invention, for a given determination condition, scores of four grades may be given according to clearly met, met, not met and clearly not met: a clear mismatch gives score s1, a mismatch gives s2, a match gives s3, and a clear match gives s4, where s1 < s2 < s3 < s4, or further s1 < s2 < 0 < s3 < s4. Take determination condition 7 as an example: for each distance curve, the difference between the second maximum and the first minimum must be greater than the sixth multiple of the interocular distance. Let S denote that difference and Thr the sixth multiple of the interocular distance; the score of the determination condition for each distance curve is then

score = s1, if S - Thr <= level1;
score = s2, if level1 < S - Thr <= 0;
score = s3, if 0 < S - Thr <= level2;
score = s4, if S - Thr > level2;

where s1 < s2 < 0 < s3 < s4 and -∞ < level1 < 0 < level2 < +∞. For example, let Thr = 10, level1 = -5, level2 = 5, s1 = -2, s2 = -1, s3 = 1 and s4 = 2. Then a difference S = 4 between the second maximum and the first minimum yields a score of -2; S = 9 yields -1; S = 14 yields 1; and S = 16 yields 2.
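A sketch of this four-grade scoring rule, reconstructed from the worked example above (the placement of the boundary cases at the grade edges is an assumption consistent with that example):

```python
def condition_score(S, Thr, level1=-5, level2=5, s=(-2, -1, 1, 2)):
    """Four-grade score for one determination condition: the margin S - Thr
    decides the grade, from clearly not met (s1) to clearly met (s4)."""
    s1, s2, s3, s4 = s
    margin = S - Thr
    if margin <= level1:
        return s1          # clearly not met
    elif margin <= 0:
        return s2          # not met
    elif margin <= level2:
        return s3          # met
    else:
        return s4          # clearly met
```

With the example values above, condition_score(4, 10) returns -2, condition_score(9, 10) returns -1, condition_score(14, 10) returns 1, and condition_score(16, 10) returns 2.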
Of course, for a given judgment condition, scores of two grades (met / not met) can also be given; the present invention does not limit the scoring scheme of each judgment condition.
After the score of the object to be detected corresponding to each determination condition is obtained, the total score of the object to be detected can be obtained based on the score of the object to be detected corresponding to each determination condition. For example, the scores corresponding to each determination condition may be summed or weighted to obtain a total score of the object to be detected.
And finally, judging whether the object to be detected is a living object or not based on the total score of the object to be detected. Specifically, if the total score of the object to be detected is greater than a preset score threshold, it may be determined that the object to be detected is a living object, otherwise, it is determined that the object to be detected is not a living object.
In addition, according to an embodiment of the present invention, multiple frames of face grayscale images of the object to be detected may be received; a score is obtained for each face grayscale image, a total score of the object to be detected is obtained from the per-frame scores, and whether the object to be detected is a living object is finally judged based on that total score.
The total score of the object to be detected can be obtained as the average of the scores of the individual face grayscale images, or as their weighted sum, where the weight of each frame's score may be the probability that the object to be detected is a living object given that score.
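A sketch of this aggregation (the function name, the equal-weight fallback and the default threshold are illustrative; thresholds would be tuned per deployment):

```python
def liveness_decision(frame_scores, weights=None, threshold=0.0):
    """Aggregate per-frame scores into a total score (mean, or weighted sum
    when per-frame weights are supplied) and compare it with a threshold."""
    if weights is None:
        total = sum(frame_scores) / len(frame_scores)
    else:
        total = sum(w * s for w, s in zip(weights, frame_scores))
    return total > threshold
```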
It is understood that the texture (i.e., the contours) of a face can be represented by the gray distribution of the pixels of the face grayscale image and of their spatial neighborhoods. For a face grayscale image without background, the naked eye can hardly tell a real face from a photographed one, yet their imaging processes differ greatly: a real face is a complex three-dimensional object while a photo of a face is planar, so they produce different illumination reflections and shadows during imaging, and the resulting differences in surface appearance are well captured by texture. After quantization, the face grayscale image of a living object highlights this texture well, whereas that of a non-living object does not.
Fig. 7A and 7B schematically illustrate quantized face grayscale images of a living object and a non-living object, respectively. The texture in Fig. 7A is distinct, while that in Fig. 7B is blurred.
Therefore, the scheme for detecting a living object according to the present invention effectively realizes liveness detection based on the image features of the quantized image; it reduces complexity and speeds up liveness detection while preserving detection accuracy, and is low-cost, economical, efficient, reliable and requires no user cooperation.
It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the various methods of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
It should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
The present invention may further comprise: a8, the method as in a7, wherein the step of extracting image gray scale features from the quantized image further comprises: and if the average gray value of the detection area is in a common average gray value interval, calculating the average gray value of the nose or the eyes. A9, the method of A8, wherein the general average gray scale interval is [65, 130 ]. A11, the method of a7, wherein the step of determining whether the object to be detected is a living object based on the acquired image features includes: based on the image gray scale feature, at least one of the following judgment conditions is judged: the gray distribution histogram of the quantized image is continuous; and the correlation coefficient between the slope of the gray level projection curve of the quantized image in the horizontal direction and the slope of the curve obtained after the gray level projection curve is turned over is larger than a preset coefficient. The method of a12, as set forth in a8, wherein, when the average grayscale value of the detection region is in the general average grayscale interval, the following determination conditions are determined based on the image grayscale feature: and the relation between the average gray value of the nose or the eye and the average gray value of the detection area meets a preset nose or eye gray relation curve. A13, the method according to any one of a10-12, wherein the step of determining whether the object to be detected is a living object based on the acquired image features comprises: for each judgment condition, obtaining a score of the object to be detected corresponding to the judgment condition based on whether the image contour feature or the image gray scale feature meets the judgment condition; obtaining a total score of the object to be detected based on the score of the object to be detected corresponding to each judgment condition; and judging whether the object to be detected is a living object or not based on the total score of the object to be detected. A14, the method of any one of A1-13, wherein the face grayscale image is a near-infrared image.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.

Claims (15)

1. A method for detecting a living object, adapted to be executed in a computing device, the method comprising the steps of:
receiving a face gray image of an object to be detected;
intercepting a detection area from the face gray level image, wherein the detection area at least comprises eyes;
quantizing the detection area to obtain a quantized image;
extracting image features from the quantized image, the image features including image contour features;
judging whether the object to be detected is a living object or not based on the acquired image characteristics; wherein
Extracting image contour features from the quantized image, comprising:
extracting at least one pair of contour lines according to the boundary of different gray levels in the quantized image, wherein each pair of contour lines comprises a left contour line and a right contour line;
for each pair of contour lines, a distance curve is calculated in which the distance between the left contour line and the right contour line varies with the line direction of the quantized image.
2. The method of claim 1, wherein the step of quantizing the detection region to obtain a quantized image comprises:
calculating the average gray value of the detection area;
and if the average gray value is in a preset average gray interval, quantizing the pixels of which the gray values are in the preset gray interval corresponding to the preset average gray interval in the detection area, and quantizing the gray values into a preset number of gray levels.
3. The method of claim 2, wherein the predetermined average gray scale interval comprises one or more of a first average gray scale interval, a second average gray scale interval, a third average gray scale interval, and a fourth average gray scale interval, and the predetermined gray scale interval comprises one or more of a first gray scale interval corresponding to the first average gray scale interval, a second gray scale interval corresponding to the second average gray scale interval, a third gray scale interval corresponding to the third average gray scale interval, and a fourth gray scale interval corresponding to the fourth average gray scale interval.
4. The method of claim 3, wherein the first average gray scale interval is [65, 100), the second average gray scale interval is [100, 145), the third average gray scale interval is [145, 180), and the fourth average gray scale interval is [180, 210);
the first gray scale interval corresponding to the first average gray scale interval is [100, 200], the second gray scale interval corresponding to the second average gray scale interval is [130, 230], the third gray scale interval corresponding to the third average gray scale interval is [170, 255], and the fourth gray scale interval corresponding to the fourth average gray scale interval is [200, 255].
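For a concrete reading of claims 2-4, the sketch below matches the detection area's average gray value against the claim-4 intervals and quantizes the matched pixels. The number of gray levels (`levels`), the linear level mapping, and all names are assumptions; the claims fix neither.

```python
import numpy as np

# (average-gray interval, corresponding gray interval) pairs from claim 4;
# the average-gray intervals are read as half-open so that they do not overlap.
INTERVALS = [
    ((65, 100),  (100, 200)),
    ((100, 145), (130, 230)),
    ((145, 180), (170, 255)),
    ((180, 210), (200, 255)),
]

def quantize_detection_area(area: np.ndarray, levels: int = 4):
    """Quantize the matched pixels of the detection area into `levels` gray levels.

    Returns None when the average gray value falls outside every
    predetermined average gray scale interval.
    """
    mean = float(area.mean())
    for (a_lo, a_hi), (g_lo, g_hi) in INTERVALS:
        if a_lo <= mean < a_hi:
            out = np.zeros(area.shape, dtype=np.uint8)
            mask = (area >= g_lo) & (area <= g_hi)
            # Map gray values in [g_lo, g_hi] linearly onto levels 1..levels;
            # pixels outside the matched gray interval stay at level 0.
            step = (g_hi - g_lo) / levels
            out[mask] = np.minimum((area[mask] - g_lo) / step, levels - 1).astype(np.uint8) + 1
            return out
    return None
```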
5. The method of claim 1, wherein the quantized image comprises a left image and a right image, and the step of extracting at least one pair of contour lines according to the boundaries of different gray levels in the quantized image comprises:
for each boundary between two gray levels,
determining, in each row of pixels of the left image and of the right image respectively, the one or more pixels corresponding to the boundary, and extracting the pixel farthest from the perpendicular bisector between the eyes, so as to form the left contour line and the right contour line of the pair of contour lines respectively.
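A sketch of the per-row rule in claim 5, assuming the pixels lying on one gray level boundary have already been marked in a boolean mask for the left (or right) half of the quantized image, and that the perpendicular bisector between the eyes is a known column index; all names are illustrative.

```python
import numpy as np

def extract_contour(boundary_mask: np.ndarray, midline_x: float) -> np.ndarray:
    """For each pixel row, keep the boundary pixel farthest from the eye midline.

    Returns one column index per row, or np.nan for rows in which the
    boundary does not appear.
    """
    rows = boundary_mask.shape[0]
    contour = np.full(rows, np.nan)
    for y in range(rows):
        xs = np.flatnonzero(boundary_mask[y])  # boundary pixels in this row
        if xs.size:
            contour[y] = xs[np.argmax(np.abs(xs - midline_x))]
    return contour
```

Running this once on the left image and once on the right image yields the left and right contour lines of one pair.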
6. The method of claim 1, wherein the image features further comprise image gray scale features, and extracting the image gray scale features from the quantized image comprises:
calculating a gray distribution histogram of the quantized image; and
calculating a gray level projection curve of the quantized image in the horizontal direction.
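Both gray scale features named in claim 6 are standard computations; a brief sketch for an 8-bit quantized image follows. Note that "projection curve in the horizontal direction" is read here as one mean gray value per image column, which is only one plausible reading of the translated claim.

```python
import numpy as np

def gray_histogram(img: np.ndarray) -> np.ndarray:
    """Gray distribution histogram over the 256 possible 8-bit gray values."""
    return np.bincount(img.ravel(), minlength=256)

def horizontal_projection(img: np.ndarray) -> np.ndarray:
    """Gray level projection curve: the mean gray value of each column."""
    return img.mean(axis=0)
```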
7. The method of claim 6, wherein extracting the image gray scale features from the quantized image further comprises:
if the average gray value of the detection area falls within a common average gray scale interval, calculating the average gray value of the nose or the eyes.
8. The method of claim 7, wherein the common average gray scale interval is [65, 130].
9. The method according to claim 1, wherein the step of determining whether the object to be detected is a living object based on the acquired image features comprises:
evaluating, based on the image contour features, at least one of the following judgment conditions, wherein each distance curve takes the row direction of the quantized image as its horizontal axis and the distance between the left contour line and the right contour line as its vertical axis:
at the same abscissa, the ordinate of the distance curve of the pair of contour lines corresponding to a boundary of lower gray levels is smaller than the ordinate of the distance curve of the pair of contour lines corresponding to a boundary of higher gray levels;
for each distance curve, the distance curve has a first minimum value in a first abscissa domain U1 = {x | |x − a| < δ1}, where a is the abscissa corresponding to the eyes in the quantized image and δ1 is the zeroth multiple of the interocular distance;
for each distance curve, the distance curve has at least one local maximum;
for each distance curve, the distance curve has a second maximum value in a second abscissa domain U2 = {x | x > a}, and the difference between the abscissas corresponding to the second maximum value and the first minimum value is not more than the first multiple of the interocular distance;
for each distance curve, the first minimum value is greater than a second multiple of the interocular distance and less than a third multiple of the interocular distance;
for each distance curve, the second maximum is not greater than a fourth multiple of the interocular distance and not less than a fifth multiple of the interocular distance;
for each distance curve, the difference between the second maximum and the first minimum is greater than a sixth multiple of the interocular distance;
for each distance curve, in a third abscissa domain U3 = {x | b − δ3 < x < b}, the slope of the distance curve varies monotonically within a predetermined slope range, where b is the abscissa corresponding to the second maximum value and δ3 is the seventh multiple of the interocular distance;
for each distance curve, in a fourth abscissa domain U4 = {x | b < x < b + δ4}, the slope of the distance curve varies monotonically within said predetermined slope range, where δ4 is the eighth multiple of the interocular distance;
for each distance curve, in a fifth abscissa domain U5 = {x | |x − c| < δ5}, the slope of the distance curve lies within a second predetermined slope range, where c is the abscissa corresponding to the first minimum value and δ5 is the ninth multiple of the interocular distance;
for every two distance curves, in a sixth abscissa domain U6 = {x | c − δ6 < x < b}, the correlation between the two distance curves is greater than a predetermined correlation value, where δ6 is the tenth multiple of the interocular distance.
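To make the style of these conditions concrete, the sketch below checks a single distance curve against the first-minimum conditions of claim 9 (a minimum inside U1 that lies between the second and third multiples of the interocular distance). The multiple values d0, m2 and m3 are placeholders; the claims leave the actual multiples unspecified.

```python
import numpy as np

def first_minimum_ok(curve: np.ndarray, eye_x: int, eye_dist: float,
                     d0: float = 0.5, m2: float = 0.2, m3: float = 1.0) -> bool:
    """Check one distance curve against two claim-9 style conditions.

    d0, m2 and m3 stand in for the patent's 'zeroth', 'second' and
    'third' multiples of the interocular distance.
    """
    # Search U1 = {x : |x - eye_x| < d0 * eye_dist} for the first minimum.
    lo = max(0, int(np.floor(eye_x - d0 * eye_dist)) + 1)
    hi = min(curve.size, int(np.ceil(eye_x + d0 * eye_dist)))
    if hi <= lo:
        return False
    first_min = np.nanmin(curve[lo:hi])
    # The first minimum must lie between the second and third multiples.
    return m2 * eye_dist < first_min < m3 * eye_dist
```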
10. The method of claim 6, wherein the step of determining whether the object to be detected is a living object based on the acquired image features comprises:
evaluating, based on the image gray scale features, at least one of the following judgment conditions:
the gray distribution histogram of the quantized image is continuous;
the correlation coefficient between the slope of the gray level projection curve of the quantized image in the horizontal direction and the slope of the curve obtained by flipping that projection curve is greater than a predetermined coefficient.
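A sketch of the flip-correlation condition of claim 10, reading "flipping" as a left-right reversal of the projection curve (so that a roughly symmetric live face yields a high correlation); both that reading and the threshold value are assumptions.

```python
import numpy as np

def projection_symmetry_ok(projection: np.ndarray, min_corr: float = 0.8) -> bool:
    """Correlate the slope of the projection curve with the slope of its
    reversed copy; min_corr stands in for the 'predetermined coefficient'."""
    slope = np.diff(projection)                # slope of the original curve
    flipped_slope = np.diff(projection[::-1])  # slope of the flipped curve
    corr = np.corrcoef(slope, flipped_slope)[0, 1]
    return corr > min_corr
```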
11. The method according to claim 7, wherein, if the average gray value of the detection area falls within the common average gray scale interval, the following judgment condition is evaluated based on the image gray scale features:
the relation between the average gray value of the nose or the eyes and the average gray value of the detection area satisfies a predetermined nose or eye gray scale relation curve.
12. The method according to claim 10, wherein the step of determining whether the object to be detected is a living object based on the acquired image features comprises:
for each judgment condition, obtaining a score of the object to be detected for that condition according to whether the image contour features or the image gray scale features satisfy the condition;
obtaining a total score of the object to be detected from the scores for the individual judgment conditions; and
judging whether the object to be detected is a living object based on the total score.
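A minimal sketch of the claim-12 scoring scheme; the per-condition scores (here, weights granted when a condition is satisfied) and the decision threshold are assumed parameters not disclosed in the claims.

```python
def is_live(condition_results: dict, weights: dict, threshold: float) -> bool:
    """Sum the score of every satisfied judgment condition and compare the
    total against a decision threshold."""
    total = sum(weights[name] for name, passed in condition_results.items() if passed)
    return total >= threshold

# Example with made-up conditions, weights and threshold:
# is_live({"histogram_continuous": True, "projection_symmetric": False},
#         {"histogram_continuous": 1.0, "projection_symmetric": 2.0},
#         threshold=1.0)  -> True
```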
13. The method of any one of claims 1-12, wherein the face grayscale image is a near-infrared image.
14. A computing device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method for detecting a living object of any of claims 1-13.
15. A readable storage medium storing a program, the program comprising instructions that, when executed by a computing device, cause the computing device to perform the method for detecting a living subject of any of claims 1-13.
CN201810510901.3A 2018-05-24 2018-05-24 Method for detecting living object, computing device and readable storage medium Active CN108764121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810510901.3A CN108764121B (en) 2018-05-24 2018-05-24 Method for detecting living object, computing device and readable storage medium

Publications (2)

Publication Number Publication Date
CN108764121A (en) 2018-11-06
CN108764121B (en) 2021-03-02

Family

ID=64005991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810510901.3A Active CN108764121B (en) 2018-05-24 2018-05-24 Method for detecting living object, computing device and readable storage medium

Country Status (1)

Country Link
CN (1) CN108764121B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160235A (en) * 2019-12-27 2020-05-15 联想(北京)有限公司 Living body detection method and device and electronic equipment
CN112329720A (en) * 2020-11-26 2021-02-05 杭州海康威视数字技术股份有限公司 Face living body detection method, device and equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1687969A (en) * 2005-05-12 2005-10-26 北京航空航天大学 File image compressing method based on file image content analyzing and characteristic extracting
CN101334895A (en) * 2008-08-07 2008-12-31 清华大学 Image division method aiming at dynamically intensified mammary gland magnetic resonance image sequence
CN103605958A (en) * 2013-11-12 2014-02-26 北京工业大学 Living body human face detection method based on gray scale symbiosis matrixes and wavelet analysis
CN105224285A (en) * 2014-05-27 2016-01-06 北京三星通信技术研究有限公司 Eyes open and-shut mode pick-up unit and method
CN105912986A (en) * 2016-04-01 2016-08-31 北京旷视科技有限公司 In vivo detection method, in vivo detection system and computer program product
CN105989263A (en) * 2015-01-30 2016-10-05 阿里巴巴集团控股有限公司 Method for authenticating identities, method for opening accounts, devices and systems
CN106355139A (en) * 2016-08-22 2017-01-25 厦门中控生物识别信息技术有限公司 Facial anti-fake method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9767358B2 (en) * 2014-10-22 2017-09-19 Veridium Ip Limited Systems and methods for performing iris identification and verification using mobile devices
US10157323B2 (en) * 2016-08-30 2018-12-18 Qualcomm Incorporated Device to provide a spoofing or no spoofing indication

Also Published As

Publication number Publication date
CN108764121A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
US10635890B2 (en) Facial recognition method and apparatus, electronic device, and storage medium
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN106778586B (en) Off-line handwritten signature identification method and system
US8184915B2 (en) Device and method for fast computation of region based image features
WO2018086543A1 (en) Living body identification method, identity authentication method, terminal, server and storage medium
US20190362193A1 (en) Eyeglass positioning method, apparatus and storage medium
WO2019033572A1 (en) Method for detecting whether face is blocked, device and storage medium
CN110728234A (en) Driver face recognition method, system, device and medium
WO2015067084A1 (en) Human eye positioning method and apparatus
EP2370931B1 (en) Method, apparatus and computer program product for providing an orientation independent face detector
CN109859217B (en) Segmentation method and computing device for pore region in face image
WO2017161636A1 (en) Fingerprint-based terminal payment method and device
CN112200136A (en) Certificate authenticity identification method and device, computer readable medium and electronic equipment
US11670069B2 (en) System and method for face spoofing attack detection
CN109376717A (en) Personal identification method, device, electronic equipment and the storage medium of face comparison
CN108764121B (en) Method for detecting living object, computing device and readable storage medium
CN111898610B (en) Card unfilled corner detection method, device, computer equipment and storage medium
CN111222433A (en) Automatic face auditing method, system, equipment and readable storage medium
CN113111880A (en) Certificate image correction method and device, electronic equipment and storage medium
CN111027637A (en) Character detection method and computer readable storage medium
US8787625B2 (en) Use of relatively permanent pigmented or vascular skin mark patterns in images for personal identification
CN106980818B (en) Personalized preprocessing method, system and terminal for face image
CN112613471A (en) Face living body detection method and device and computer readable storage medium
CN112541899B (en) Incomplete detection method and device of certificate, electronic equipment and computer storage medium
CN108875467B (en) Living body detection method, living body detection device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211220

Address after: 541000 building D2, HUTANG headquarters economic Park, Guimo Avenue, Qixing District, Guilin City, Guangxi Zhuang Autonomous Region

Patentee after: Guangxi Code Interpretation Intelligent Information Technology Co.,Ltd.

Address before: 201207 2 / F, building 13, 27 Xinjinqiao Road, Pudong New Area pilot Free Trade Zone, Shanghai

Patentee before: SHIMA RONGHE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230809

Address after: C203, 205, 206, 2nd floor, Building 106 Lize Zhongyuan, Chaoyang District, Beijing, 100000

Patentee after: EYESMART TECHNOLOGY Ltd.

Address before: 541000 building D2, HUTANG headquarters economic Park, Guimo Avenue, Qixing District, Guilin City, Guangxi Zhuang Autonomous Region

Patentee before: Guangxi Code Interpretation Intelligent Information Technology Co.,Ltd.
