NZ733764B2 - Process and system for video spoof detection based on liveness evaluation - Google Patents
Abstract
The invention presents a process for determining a video as being a spoof or a genuine recording of a live biometric characteristic, characterized in that it comprises the steps of: preprocessing (200) the video, determining a liveness score (300) of the video, said determination comprising, for each frame of a plurality of frames (j) of the video: - computing a difference between a motion intensity of a current frame and that of each frame of a set of preceding frames to infer a differential motion intensity of the current frame, - inferring (330), from the differential motion intensities of the plurality of frames, a motion intensity of the video, - comparing (340) said motion intensity to a predetermined threshold, and assigning a liveness score to the video, and determining (400) whether the video is a genuine recording of a biometric characteristic or a spoof.
Description
PROCESS AND SYSTEM FOR VIDEO SPOOF DETECTION BASED ON
LIVENESS EVALUATION
TECHNICAL FIELD OF THE INVENTION
The invention relates to the field of processes for discriminating between
genuine and spoof videos of biometric characteristics, and applies in particular to
the detection of spoof of the replay video type.
The invention can be implemented in a context of identification or
authentication for giving access to a secure zone or allowing a secure transaction.
BACKGROUND ART
Numerous controls rely on authentication or identification based on biometric
characteristics, for instance to allow an individual to access a secure place or to
proceed with a secure transaction.
Some controls are implemented by recording a video of a biometric
characteristic (e.g. iris) of an individual, and comparing the extracted biometric
features to a database of recorded individuals to find a match.
To fool such a control, attacks have been developed in which an imposter
uses a stolen biometric sample from a database to gain access to the secured zone
or is allowed to perform the transaction.
These attacks (i.e. presentation attacks) can take the form of a print attack,
in which a biometric sample is printed using a high quality laserjet or inkjet printer, and
used during the control process. Therefore during the control, a video of the print,
instead of the genuine biometric characteristic, is recorded.
Attacks can also take the form of a replay video attack, in which a high
quality electronic video displaying a biometric sample is used during the control
process. In that case, the control system records a new video of the displayed video
instead of the biometric characteristic.
A control system should be able to detect such presentation attacks in order
to be secure and reliable, and thus should be able to determine the liveness of the
subject of the recording (either a genuine biometric characteristic, or a replay video,
or a print of the biometric sample).
A process for detecting presentation attacks on face recognition capture
devices has been proposed in the document of S. Bharadwaj, T. I. Dhamecha, M.
Vatsa and R. Singh, Computationally efficient face spoofing detection with motion
magnification, in 2013 IEEE Conference on Computer Vision and Pattern
Recognition Workshops, pages 105-110, IEEE, 2013.
This process relies on using the magnitude of motion magnification and texture
descriptors, and is therefore not adapted to other kinds of biometric characteristics
such as, for instance, irises, which are also targeted by replay video attacks.
Therefore there is a need for a process allowing detection of replay video
attacks in iris recognition.
SUMMARY OF THE INVENTION
One aim of the invention is to provide a process for determining a video as
being a spoof or a genuine recording of a biometric characteristic, no matter which type
of biometric characteristic is the subject of the video.
Accordingly, an object of the invention is a process for determining a video of a
biometric characteristic as being a spoof or a genuine recording of a live biometric
characteristic, wherein the video comprises a temporal sequence of frames, the
process being implemented in a system comprising a processing unit,
characterized in that it comprises the steps of:
- preprocessing the video, said preprocessing comprising aligning the
biometric characteristic on each frame of the video,
- determining a liveness score of the video, said determination comprising, for
each frame of a plurality of frames:
o computing a difference between a motion intensity of a current frame
and a motion intensity of each frame of a set of preceding frames,
o inferring, from said differences, a differential motion intensity of the
current frame,
o inferring from the differential motion intensities of the plurality of
frames a motion intensity of the video,
o comparing said motion intensity to a predetermined threshold, and
assigning a liveness score to the video, depending on the
comparison of the motion intensity of the video with the threshold,
- according to the liveness scores of the video, determining whether the video
is a genuine recording of a biometric characteristic or a spoof.
In some embodiments, the process can comprise at least one of the
following features:
- The motion intensity of a frame comprises phase variation of the frame, and
motion intensity of the video comprises phase variation of the video.
- The motion intensity of the video is inferred from differential motion
intensities of at least ten frames of the video.
- The set of frames for computation of the motion intensity difference with the
current frame comprises from three to eight frames, and preferably five
frames preceding the current frame.
- Each frame is divided into non-overlapping blocks of constant size, and the
computation of the difference in motion intensity between two frames is
carried out by computing, for every block in the current frame, the difference
between a motion intensity of the block and a motion intensity of a
corresponding block of each of the set of previous frames, and the
differential motion intensity of the current frame is obtained by:
o selecting, for each block of the current frame, the maximum motion
intensity difference between said block and the corresponding block
of each of the set of previous frames, and
o summing the maximum motion intensities of all the blocks in the
frame.
- The preprocessing step further comprises a step of magnifying the motion in
the video.
- The motion magnification is a phase-variation magnification and comprises:
o decomposing the video using a Fast Fourier Transform,
o applying a bandpass filter to the phase,
o applying an amplification factor to at least some phase components,
o implementing an inverse Fourier Transform to reconstruct the video.
- The biometric characteristic on the video is an iris, and the preprocessing of
the video comprises selecting (230) frames of the video to remove blinks of
the eye.
- The process further comprises comparing the motion intensity of the video to
a second threshold, and the step of assigning a liveness score to the video
depends on the comparison of the motion intensity with the two thresholds.
- The differential motion intensity of each frame is further normalized by
applying a sigmoid function to it.
- The motion intensity of the video is the mean of the normalized differential
intensities, and said motion intensity is compared to the first threshold and, if
it exceeds said threshold, the video is considered as being a spoof of the
replay video type, and the liveness score assigned to the frame is 0.
- The second threshold is lower than the first threshold, the motion intensity of
the video is compared to the second threshold, and, if below said threshold,
the video is considered as being a spoof of the print type and the liveness
score assigned to the frame is 0.
Another object of the invention is a computer program product, comprising software
code adapted to perform the process according to the preceding introduction when
implemented by a processing unit.
Another object of the invention relates to a processing unit that is configured to
implement the process according to the preceding introduction.
Another object of the invention relates to an authentication or identification system,
comprising a processing unit adapted to implement the above-mentioned process
and a video camera adapted to acquire a video of a biometric characteristic, and to
transfer said video to the processing unit.
The process according to the invention makes it possible to differentiate
between a replay video attack and a genuine recording of a biometric characteristic,
by assessing a motion intensity in the video.
Indeed, the replay video attack has more frequency components than a
true recording of a biometric characteristic. Therefore if the motion intensity in a video
is higher than average, the video can be considered as a spoof video.
The process is particularly robust when the assessed motion intensity of the
video is a phase variation of the video.
Furthermore, the decomposition of the video frames into blocks makes the
process more robust.
BRIEF DESCRIPTION OF THE DRAWINGS
The features and advantages of the invention will be apparent from the
following more detailed description of certain embodiments of the invention and as
illustrated in the accompanying drawings, in which:
- Figure 1 schematically shows an authentication or identification system
adapted to perform a process according to an embodiment of the invention.
- Figure 2a schematically shows the main steps of the process according to an
embodiment of the invention.
- Figure 2b schematically shows the computation steps of a frame differential
phase information.
- Figure 2c schematically shows the implementation of the step of computing
differential phase information of a frame.
DETAILED DESCRIPTION OF AT LEAST ONE EMBODIMENT OF THE
INVENTION
System for authentication or identification
With reference to figure 1, a system 1 for authentication or identification of an
individual is shown. This system is preferably used for controlling access, for
instance of individuals wishing to enter a secured zone.
The system 1 comprises a video camera 11 and a processing unit 12,
connected, either by a wire or wirelessly, to the video camera in order to receive
video recordings from said camera. In an embodiment, the system is integrated in a
portable device such as a smartphone, a tablet or the like. In another embodiment,
the video camera can be remote from the processing unit, for instance in an
installation where the camera is at the entrance of a secured zone and the
processing unit is in a separate room.
The video camera 11 is adapted to record a video recording of a biometric
characteristic; however, it need not be of very high resolution. For instance a video
camera of a smartphone usually has a sufficient resolution for carrying out the
following process.
In the following, a video is considered to be a temporal sequence of a
plurality of N frames of equal size (number of pixels).
The processing unit 12 has computational means and is adapted to perform
the process disclosed hereinafter, through implementation of proper software code.
As explained before, the system 1 is supposed to record a video of a live
biometric characteristic of an individual, but may undergo an attack, for instance of
the replay video type, in which the video camera 11 records a video of a display of a
video of a biometric characteristic, or of the print attack type, in which the video
camera 11 records a video of an image of a biometric characteristic printed on a
support medium (e.g. high quality paper).
Spoof detection process
With reference to figure 2a, a process for determining whether a video is a
spoof or a genuine recording of a live biometric characteristic will be described.
The first step 100 is the loading, by the processing unit 12, of a video to
assess. This video may have been recorded by the camera 11 and either
transmitted directly to the unit 12 or stored in a database or memory of the system
for later loading.
Pre-processing
The process then comprises a pre-processing step 200 of the video.
The preprocessing comprises aligning 210 the biometric sample on each
frame of the video, if the object that has been recorded has moved during the
recording.
Furthermore, each frame may also be reframed 220 in order to only keep on
the frame the region of the biometric characteristic.
The biometric characteristic can be of various types: iris, shape of the face,
pattern of blood vessels, etc.; however, the process is preferably carried out on
irises.
If the recorded biometric characteristic is an iris, then the preprocessing also
comprises removing 230 frames corresponding to the blink of the eye, in order to
keep only those frames in which the eye is fully open and the iris pattern is visible.
The blink removal can be done manually. Alternatively, blinks can be
automatically detected and removed, for instance by implementing one of the
methods disclosed in:
- Jiang-Wei Li, "Eye blink detection based on multiple Gabor response
waves," Machine Learning and Cybernetics, 2008 International Conference
on, vol. 5, pp. 2852-2856, 12-15 July 2008,
- Inho Choi, Seungchul Han, Daijin Kim, "Eye Detection and Eye Blink
Detection Using AdaBoost Learning and Grouping," Computer
Communications and Networks (ICCCN), 2011 Proceedings of 20th
International Conference on, pp. 1-4, July 31 2011-Aug. 4 2011,
- Lee, Won Oh, Eui Chul Lee, and Kang Ryoung Park, "Blink detection robust
to various facial poses," Journal of Neuroscience Methods 193.2 (2010):
356-372.
The video may contain, after removal of the frames where the eye blinks,
about thirty or more frames. Preferably, the video comprises at least fifteen frames,
in order to properly carry out the remainder of the process.
Optionally, but preferably, the preprocessing may also comprise an
additional step 240 of magnifying the motion in each frame of the video.
This step may preferably be implemented by magnifying phase variation in
the video. To this end, frequency transformation techniques such as Fast Fourier
Transform may be implemented to decompose the video and separate the
amplitude from the phase.
Then, a bandpass filter is applied to the phase to remove any temporal DC
component. The bandpass filter is preferably a Butterworth bandpass filter. The
temporally bandpassed phases correspond to the motion in a frame. The phases
are then multiplied by an amplification factor in order to be amplified.
The video is then reconstructed by using an inverse Fourier transform, and
thus the motion enhancement is done.
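The magnification steps above can be sketched as follows. This is a crude illustration, not the patent's implementation: the function name, filter order, cut-off frequencies and amplification factor are assumptions, and a simple per-frame 2D FFT is used, whereas the published phase-based methods rely on steerable pyramids and phase handling that are omitted here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_phase(frames, fs, low=0.4, high=3.0, alpha=10.0):
    """Illustrative phase-variation magnification (step 240).

    frames: float array of shape (N, H, W); fs: frame rate in Hz.
    The cut-off frequencies, filter order and amplification factor
    alpha are assumptions, not values from the patent.
    """
    # Decompose each frame with a Fast Fourier Transform to separate
    # the amplitude from the phase.
    spectra = np.fft.fft2(frames, axes=(1, 2))
    amplitude = np.abs(spectra)
    phase = np.angle(spectra)

    # Temporal Butterworth bandpass on the phase removes the DC
    # component; the bandpassed phases correspond to the motion.
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    band = filtfilt(b, a, phase, axis=0)

    # Amplify the bandpassed phase components and reconstruct the
    # video with an inverse Fourier transform.
    magnified = amplitude * np.exp(1j * (phase + alpha * band))
    return np.real(np.fft.ifft2(magnified, axes=(1, 2)))
```

The Butterworth choice mirrors the filter the text recommends; any zero-phase temporal bandpass would play the same role.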
In other, though less preferred, embodiments, motion magnification may be
carried out by implementing the Phase-Based Eulerian video magnification as
disclosed by N. Wadhwa et al. in "Phase-Based Video Motion Processing", ACM
Transactions on Graphics, 32(4):80, 2013.
Alternatively, Eulerian Video Magnification, as disclosed by H.-Y. Wu et al in
“Eulerian video magnification for revealing subtle changes in the world”. ACM
Transactions on Graphics, 31(4):65, 2012, may also be implemented in order to
carry out motion magnification.
Other methods may also be implemented, such as Lagrangian Motion
Magnification, or the methods disclosed by:
- Liu C., Torralba A., Freeman W. T., Durand F., and Adelson E. H., 2005.
"Motion magnification." ACM Trans. Graph. 24, 519-526, or
- Wang, J., Drucker, S. M., Agrawala, M., and Cohen, M. F. 2006.
"The cartoon animation filter." ACM Trans. Graph. 25, 1169-1173.
Determination of a liveness score of the video
Once the video has been pre-processed, the process further comprises a
step 300 of determining a liveness score of the video.
Preferably, in order to implement this step, each frame of the video may be
downscaled 310 to a smaller size. For instance, frames may be downscaled to
100*100 pixels. This allows a quicker processing of the video.
Moreover, for each downscaled frame, the motion intensity, which is preferably
the phase variation between the frames, is normalized to have its values in the range of
0 to 1. Let F(j) be the motion component (for instance the magnified phase variation)
of the jth frame of the video. F(j) is the sum of the differences between the pixels of a
frame and the corresponding pixels of the previous frame. For instance, in the case
where the frames are downscaled to 100*100 pixels:

F(j) = Σ (x=1 to 100) Σ (y=1 to 100) |I_j(x, y) − I_(j−1)(x, y)|

where I_j is the jth frame.
The normalized motion component NorF(j) is given by:

NorF(j) = F(j) / max(F(j)),

where j = 1:N.
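The motion component and its normalization can be sketched as follows, assuming plain absolute pixel differences between consecutive frames; the function name and array layout are illustrative, not from the patent.

```python
import numpy as np

def motion_components(frames):
    """Illustrative computation of F(j) and NorF(j).

    frames: array of shape (N, H, W), e.g. frames downscaled
    to 100*100 pixels.
    F(j) sums the absolute differences between the pixels of frame j
    and the corresponding pixels of frame j-1; NorF rescales the
    values into [0, 1] by dividing by the maximum.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    F = diffs.sum(axis=(1, 2))   # one value per frame j = 2..N
    return F / F.max()           # NorF(j) = F(j) / max(F)
```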
Then, for a plurality of frames j, a differential motion intensity 320 of the
frame is determined. In the case where the motion intensity is a phase variation,
the differential motion intensity is called the differential phase variation of a frame.
This step will be described in more detail with reference to figure 2b.
Preferably, the differential motion intensity is computed for at least six frames
of the video, and even more preferably for at least ten or eleven frames of the video,
as the decision based on the differential motion intensity is more robust with an
increasing number of frames, and particularly from the eleventh frame.
The determination of the differential motion intensity of a frame j comprises
computing, for the current frame j, the difference between a motion intensity of the
frame j, and a motion intensity of each of a set of frames preceding frame j.
Preferably, as shown in figure 2c, the set of frames comprises between three
and eight frames, and preferably five frames.
Therefore, in a preferred embodiment, this step is carried out for each frame j
from the fifth. The differential motion intensity is computed using a sliding window
approach with a window size of five frames (cardinal of the set of frames) and an
increment of one frame, to detect the rate of change, for instance of phase, with
respect to time, that is, for a current frame NorF(j), using the five previous frames
NorF(j − 1) to NorF(j − 5).
The computation 320 of the differential motion intensity of the frame is as follows.
In an embodiment, each frame is first divided 321 into non-overlapping
blocks of a specific and constant size, bx*by, as shown for instance in figure 2a.
Following the above example, the block size may be 20*20 pixels, resulting in a
number k of blocks equal to 25.
The motion intensity of a block is further referred to as the block motion intensity
and represented as NorFB(j)k. In a preferred embodiment, the motion intensity of a
block is the normalized block phase variation.
Then, a difference 322 in motion intensity between the current frame and
each of the set of previous frames is determined by computing, for every block in the
current frame, the difference between a motion intensity of the block and a motion
intensity of a corresponding block of each of the set of previous frames.
For a particular block k, the differential intensity between the current frame
and the set of previous frames is given by:

DMI(j − 5)k = NorFB(j)k − NorFB(j − 5)k
DMI(j − 4)k = NorFB(j)k − NorFB(j − 4)k
DMI(j − 3)k = NorFB(j)k − NorFB(j − 3)k
DMI(j − 2)k = NorFB(j)k − NorFB(j − 2)k
DMI(j − 1)k = NorFB(j)k − NorFB(j − 1)k

for all the k blocks (for instance k = 1, 2, ..., 25).
The differential motion intensity for a particular block k of a frame j is
obtained by determining 323 the maximum of all the differences computed at step
322:

DMI(j)k = max{DMI(j − 5)k, ..., DMI(j − 1)k}

Then, for a frame, the differential motion intensity of frame j, noted CMI,
cumulated over all the blocks of the entire frame j, is obtained in a step 324 by
summing the differential motion intensities across all the blocks k in the frame:

CMI(j) = Σk DMI(j)k
The division of the frame into smaller blocks reduces computational requirements for
the processing unit.
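Steps 321 to 324 can be sketched as follows, assuming the per-block normalized motion intensities NorFB(j)k are already available in an array; the function name, parameters and layout are illustrative.

```python
import numpy as np

def cumulative_differential_motion(NorFB, j, window=5):
    """Illustrative steps 321-324: block-wise differential motion.

    NorFB: array of shape (N, k) holding the normalized motion
    intensity of each of the k blocks of each frame (e.g. k = 25
    blocks of 20*20 pixels). For frame j, each block keeps its
    maximum difference against the same block in the `window`
    preceding frames; CMI(j) sums these maxima over all blocks.
    """
    # DMI(j - d)_k = NorFB(j)_k - NorFB(j - d)_k, for d = 1..window
    diffs = NorFB[j] - NorFB[j - window:j]   # shape (window, k)
    dmi = diffs.max(axis=0)                  # DMI(j)_k: max per block
    return dmi.sum()                         # CMI(j)
```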
In another embodiment, the frame may not be divided into blocks, and hence
the step 320 of differential motion intensity determination may only comprise a step 325
of computing a motion intensity difference between frame j and each of the previous
set of frames, and a step 326 of determining the differential motion intensity of frame
j as the maximum motion intensity difference computed at step 325. In this
embodiment, there is no intermediate step as the frame is not divided into blocks.
Preferably, the differential motion intensity obtained at step 324 or 326 is
further normalized during a step 327, so that it can then be compared to a common
threshold.
In order to normalize the differential motion intensity, a single-sided logistic or
sigmoid function is preferably applied. The normalized differential motion intensity
for a frame j is noted NCMI(j). In the case of the use of a single-sided sigmoid
function, NCMI(j) is expressed as follows:

NCMI(j) = 1 / (1 + e^(−CMI(j)))

In this example, the obtained normalized differential motion intensity ranges
between 0.5 and 1, given the use of the single-sided sigmoid function.
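This normalization can be sketched as below. The exact sigmoid is not spelled out in the text, so the standard logistic function is assumed; for non-negative CMI values it maps into [0.5, 1), matching the stated 0.5-to-1 range.

```python
import numpy as np

def normalize_cmi(cmi):
    """Illustrative step 327: single-sided sigmoid normalization.

    Applies the standard logistic 1 / (1 + exp(-x)), an assumed
    choice of sigmoid. CMI(j) = 0 maps to 0.5 and large positive
    values approach 1.
    """
    return 1.0 / (1.0 + np.exp(-np.asarray(cmi, dtype=float)))
```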
Then, a motion intensity of the video is inferred from the normalized
differential motion intensities of the frames during a step 330. The motion intensity of
the video is preferably a phase variation of the video, in the case where the
differential motion intensity of a frame is a differential phase variation.
The motion intensity of the video is preferably obtained by computing the
mean normalized differential motion intensity m(NCMI(j)) of the frames, preferably
over the six or more frames, and even more preferably over the ten or more frames,
for which the differential motion intensity of the frames has been computed at step
320.
Last, the liveness score of the video is obtained by comparing 340 the
motion intensity of the video, for instance the phase variation of the video, to a
predetermined threshold indicated as Th. The value of this threshold can be
determined empirically based on a database of spoof videos.
For a replay video attack, the recording of a replay video comprises more
frequency components than a genuine recording of a live biometric characteristic.
For this reason, the mean normalized differential motion intensity should be higher
for a replay video attack than for a genuine recording.
Hence, if the mean normalized differential motion intensity is higher than the
predetermined threshold, the video is considered as being a replay attack video and
the liveness score LS is set to 0.
In an embodiment, if the mean normalized differential motion intensity is
lower than the threshold Th, the video is considered as being a genuine recording of
a biometric characteristic, and the liveness score LS is set to 1.
This embodiment is summed up as follows:

LS(j) = 1, if m(NCMI(j)) ≤ Th
        0, otherwise
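The single-threshold decision can be sketched as follows; the function name and inputs are illustrative.

```python
def liveness_score(ncmi_values, th):
    """Illustrative steps 330-340 with a single threshold Th.

    ncmi_values: the normalized differential motion intensities
    NCMI(j) of the frames; their mean is the motion intensity of the
    video. A mean at or below Th scores 1 (genuine); a mean above Th
    scores 0 (replay video spoof).
    """
    mean_ncmi = sum(ncmi_values) / len(ncmi_values)
    return 1 if mean_ncmi <= th else 0
```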
In an alternate embodiment, the normalized differential motion intensity can
be compared to another threshold Th′, which is lower than the first threshold Th, and
corresponds to a threshold below which the frame is considered as pertaining to a
recording of a picture (print attack).
Indeed, such videos would have different motion information, and in
particular a different phase variation, than the live presentation of the individuals.
In that case, if the mean normalized differential motion intensity is below the
second threshold Th′, the video is considered as being a print attack video and the
liveness score is set to 0. If the mean normalized differential motion intensity is
between the first and second thresholds Th, Th′, the video is considered as being a
genuine recording of a live biometric characteristic and the liveness score LS is set
to 1.
This is summarized as follows:

LS(j) = 1, if Th′ ≤ m(NCMI(j)) ≤ Th
        0, otherwise
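The two-threshold variant can be sketched in the same way, again with illustrative names.

```python
def liveness_score_two_thresholds(ncmi_values, th, th_prime):
    """Illustrative two-threshold decision (Th' < Th).

    A mean above Th is treated as a replay-video spoof, a mean below
    Th' as a print spoof; only a mean between the two thresholds
    scores 1 (genuine live recording).
    """
    mean_ncmi = sum(ncmi_values) / len(ncmi_values)
    return 1 if th_prime <= mean_ncmi <= th else 0
```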
Once the step of determining a liveness score has been carried out, a
decision 400 is made as to the authenticity of the whole video.
The decision based on at least ten or eleven frames has shown great
precision and a low error rate. Therefore the above technique makes it possible
to differentiate between spoof and genuine videos, even with a processing unit
having relatively low computational capacity, such as a processor of a smartphone.
1003189124
Claims (15)
1. A process for determining a video of a biometric characteristic as being a spoof or a genuine recording of a live biometric characteristic, wherein the video comprises a temporal sequence of frames, the process being implemented in a system comprising a processing unit, characterized in that said process comprises the steps of: - preprocessing the video, said preprocessing comprising aligning the biometric characteristic on each frame of the video, - determining a liveness score of the video, said determination comprising, for each frame of a plurality of frames: o computing a difference between a motion intensity of a current frame and a motion intensity of each frame of a set of frames preceding said current frame, o inferring, from said differences, a differential motion intensity of the current frame, - inferring a motion intensity of the video from the differential motion intensities of the plurality of frames, - comparing said motion intensity of the video to a predetermined first threshold, and assigning a liveness score to the video, depending on the comparison of the motion intensity of the video with the first threshold, and - according to the liveness scores of the video, determining whether the video is a genuine recording of a biometric characteristic or a spoof.
2. A process according to claim 1, wherein the motion intensity of a frame comprises phase variation of the frame, and motion intensity of the video comprises phase variation of the video.
3. A process according to claim 1 or 2, wherein the motion intensity of the video is inferred from differential motion intensities of at least ten frames of the video.
4. A process according to any one of claims 1 to 3, wherein the set of frames for computation of the motion intensity difference with the current frame comprises from three to eight frames, and preferably five frames preceding the current frame.
5. A process according to any one of claims 1 to 4, wherein each frame is divided into non-overlapping blocks of constant size, and the computation of the difference in motion intensity between two frames is carried out by computing, for every block in the current frame, the difference between a motion intensity of the block and a motion intensity of a corresponding block of each of the set of previous frames, and the differential motion intensity of the current frame is obtained by: - selecting, for each block of the current frame, the maximum motion intensity difference between said block and the corresponding block of each of the set of previous frames, and - summing the maximum motion intensities of all the blocks in the frame.
6. A process according to any one of claims 1 to 5, wherein the preprocessing step further comprises a step of magnifying the motion in the video.
7. A process according to claim 6, wherein the motion magnification is a phase-variation magnification and comprises: - decomposing the video using a Fast Fourier Transform, - applying a bandpass filter to the phase, - applying an amplification factor to at least some phase components, and - implementing an inverse Fourier Transform to reconstruct the video.
8. A process according to any one of claims 1 to 7, wherein the biometric characteristic on the video is an iris, and the preprocessing of the video comprises selecting frames of the video to remove blinks of the eye.
9. A process according to any one of claims 1 to 8, further comprising comparing the motion intensity of the video to a second threshold, and the step of assigning a liveness score to the video depends on the comparison of the motion intensity with the first and second thresholds.
10. A process according to any one of the preceding claims, wherein the differential motion intensity of each frame is further normalized by applying a sigmoid function to said differential motion intensity.
11. A process according to claim 10, wherein the motion intensity of the video is the mean of the normalized differential intensities, and said motion intensity is compared to the first threshold and, if said motion intensity exceeds said first threshold, the video is considered as being a spoof of the replay video type, and the liveness score assigned to the frame is 0.
12. A process according to claim 10 or 11, comprising comparing the motion intensity of the video to a second threshold, and the step of assigning a liveness score to the video depends on the comparison of the motion intensity with the first and second thresholds, wherein the second threshold is lower than the first threshold, the motion intensity of the video is compared to the second threshold, and, if below said second threshold, the video is considered as being a spoof of the print type and the liveness score assigned to the frame is 0.
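The two-threshold decision of claims 10 to 12 can be sketched as follows. The function name, the concrete score values (0 for spoof, 1 for live), and the threshold values are assumptions for illustration; the sigmoid normalization, the mean, and the ordering of the two thresholds follow the claims.

```python
import numpy as np

def liveness_score(dmi, t_replay=0.9, t_print=0.2):
    """Sketch of the scoring of claims 10-12 (score values assumed).

    `dmi`: per-frame differential motion intensities of the video.
    Each value is normalized with a sigmoid (claim 10); the video motion
    intensity is the mean of the normalized values (claim 11). Above the
    first threshold the video is treated as a replay spoof; below the
    second, lower threshold, as a print spoof (claim 12).
    """
    dmi = np.asarray(dmi, dtype=np.float64)
    normalized = 1.0 / (1.0 + np.exp(-dmi))   # sigmoid normalization
    motion = normalized.mean()                # video motion intensity
    if motion > t_replay:                     # too much motion: replay spoof
        return 0
    if motion < t_print:                      # too little motion: print spoof
        return 0
    return 1                                  # within bounds: considered live
```

The two thresholds thus bound a band of plausible natural motion: replays of a screen tend to show amplified high-frequency motion, while printed photographs show almost none.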
13. A computer program product, comprising software code adapted to perform the process according to any one of claims 1 to 12 when implemented by a processing unit.
14. A processing unit, characterized in that said processing unit is configured to implement the process for determining a video of a biometric characteristic as being a spoof or a genuine recording of a live biometric characteristic according to any one of claims 1 to 12.
15. An authentication or identification system, comprising a processing unit according to claim 14 and a video camera adapted to acquire a video of a biometric characteristic, and to transfer said video to the processing unit.

[Figure: flowchart of the process (300) — loading of a video, downscaling of the frames, determination of differential motion intensity (320), determination of video motion intensity, threshold comparison, decision on the video.]

[Figure: determination of differential motion intensity — frame j is divided into k blocks (322); for each block, DMI(j−i)_k is computed for i = 1, …, 5 (325); DMI(j)_k = max over i of DMI(j−i)_k; CMI(j) = Σ_k DMI(j)_k (326).]
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IB2015/001534 WO2016113587A1 (en) | 2015-01-13 | 2015-01-13 | Process and system for video spoof detection based on liveness evaluation |
Publications (2)
Publication Number | Publication Date |
---|---|
NZ733764A NZ733764A (en) | 2020-10-30 |
NZ733764B2 true NZ733764B2 (en) | 2021-02-02 |
Family
ID=
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2015377788B2 (en) | Process and system for video spoof detection based on liveness evaluation | |
Wu et al. | Mantra-net: Manipulation tracing network for detection and localization of image forgeries with anomalous features | |
CN109948408B (en) | Activity test method and apparatus | |
EP3525165B1 (en) | Method and apparatus with image fusion | |
CN106557726B (en) | Face identity authentication system with silent type living body detection and method thereof | |
Padole et al. | Periocular recognition: Analysis of performance degradation factors | |
Raja et al. | Video presentation attack detection in visible spectrum iris recognition using magnified phase information | |
WO2019033572A1 (en) | Method for detecting whether face is blocked, device and storage medium | |
WO2021135064A1 (en) | Facial recognition method and apparatus, and computer device and storage medium | |
WO2016084072A1 (en) | Anti-spoofing system and methods useful in conjunction therewith | |
Boulkenafet et al. | Scale space texture analysis for face anti-spoofing | |
US9633284B2 (en) | Image processing apparatus and image processing method of identifying object in image | |
CN111611873A (en) | Face replacement detection method and device, electronic equipment and computer storage medium | |
US11281922B2 (en) | Face recognition system, method for establishing data of face recognition, and face recognizing method thereof | |
Benlamoudi et al. | Face spoofing detection using local binary patterns and fisher score | |
Stuchi et al. | Improving image classification with frequency domain layers for feature extraction | |
Bresan et al. | Facespoof buster: a presentation attack detector based on intrinsic image properties and deep learning | |
Lin et al. | Face detection based on skin color segmentation and SVM classification | |
CN106611417B (en) | Method and device for classifying visual elements into foreground or background | |
Peng | Face recognition at a distance: Low-resolution and alignment problems | |
NZ733764B2 (en) | Process and system for video spoof detection based on liveness evaluation | |
Utami et al. | Face spoof detection by motion analysis on the whole video frames | |
Phan et al. | Using ldp-top in video-based spoofing detection | |
KR102579610B1 (en) | Apparatus for Detecting ATM Abnormal Behavior and Driving Method Thereof | |
Vu et al. | Face Recognition for Video Security Applications |