WO2017098457A1 - A method and a system for determining if the video flow provided by a mobile device is the original one - Google Patents

A method and a system for determining if the video flow provided by a mobile device is the original one

Info

Publication number
WO2017098457A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
mobile device
contrast
built
camera
Prior art date
Application number
PCT/IB2016/057477
Other languages
French (fr)
Inventor
Christophe Remillet
Original Assignee
Onevisage Sa
Priority date
Filing date
Publication date
Application filed by Onevisage Sa
Priority to EP16810078.2A (EP3387637A1)
Publication of WO2017098457A1

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09C - CIPHERING OR DECIPHERING APPARATUS FOR CRYPTOGRAPHIC OR OTHER PURPOSES INVOLVING THE NEED FOR SECRECY
    • G09C5/00 - Ciphering apparatus or methods not provided for in the preceding groups, e.g. involving the concealment or deformation of graphic data such as designs, written or printed messages
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141 - Control of illumination
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3226 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using a predetermined code, e.g. password, passphrase or PIN
    • H04L9/3231 - Biological data, e.g. fingerprint, voice or retina

Definitions

  • The lip region is a colour-consistent region, namely a face region with a magenta hue for most individuals, where reflection changes can therefore be easily detected for most individuals.
  • The contrast change sequence applied during the video capture is compared to the contrast change sequence detected in the provided video also by analysing the reflection changes in the eyes (comprising pupil, iris and sclera) and/or in the nostrils region.
  • The eyes and nostrils regions of the face are, like the lips, colour-consistent regions where reflection changes can be easily detected.
  • This approach, which further increases the entropy of the method, can generally be extended to detecting dark, poorly illuminated areas of the face where any single bright colour or luminance change can be easily detected.
  • The application 151 can further reject the provided video as not being the one captured by the built-in mobile camera in case of mismatching or inconsistent contrast changes, based on the analysis of any one or several of the following: the reflected surfaces and positions of the subject (the face or, more generally, the individual's attribute) with respect to the mobile device's camera, and the starting times, end times and/or periods of the detected contrast change sequence(s) compared to the contrast control sequences used during the capture of the video.
  • The method according to the invention further comprises, before the capture of the video, randomly determining the number of contrast change sequences and the starting time and period of each of said contrast change sequences, taking into account a maximum time allowed to capture the video (an illustrative sketch of such a random scheduling is given after this list).
  • The method according to the invention further comprises, during the video capture, determining the contrast change sequences according to the detection of specific subject poses, such as the orientation of the face with respect to the mobile device's camera.
  • The method according to the invention further comprises, during the video capture, activating a contrast change in real time when shadow areas are detected in the video. This provision can be used to change the parameters of the contrast control to optimize its effect, namely to obtain an enhanced contrast change. For instance, a face partially turned away from the camera, namely placed three-quarters on to the camera, allows a maximum reflection of the light on the side of the nose.
  • The application 151 can be configured with a default and write-once mean shutter lag value SLM and variance value SLV, which can be determined during the user enrolment phase by capturing one or more videos of the individual's attribute with the user's mobile device. In that case, any contrast change detected at a time T'1 greater than the sum of the starting time T1, the mean shutter lag SLM and the maximum variance value SLV will be interpreted as a potentially serious hacking situation.
  • The application 151 can pre-check whether the video capture conditions are valid or applicable. Outdoor conditions with high luminance are not good conditions for detecting contrast changes, as any reflected contrast change will hardly be detectable between T0 and T5. In such a case, in a preferred embodiment, the application 151 can inform the user that the video capturing conditions are not valid and prompt the user to move indoors.
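Purely as an illustration of the random scheduling mentioned above (the value ranges, the number of sequences and the gap between sequences are arbitrary assumptions, not values given by the patent), a minimal sketch could look as follows.

```python
import random
from typing import List, Tuple

def random_contrast_schedule(max_capture_time: float,
                             max_sequences: int = 3,
                             min_period: float = 0.2,
                             max_period: float = 0.8,
                             gap: float = 0.3) -> List[Tuple[float, float]]:
    """Randomly pick the number of contrast change sequences and, for each,
    a start time and a period, so that all sequences (plus a small gap)
    fit within the maximum time allowed to capture the video."""
    schedule, cursor = [], gap
    for _ in range(random.randint(1, max_sequences)):
        period = random.uniform(min_period, max_period)
        latest_start = max_capture_time - period
        if cursor > latest_start:
            break                       # no room left before the capture ends
        start = random.uniform(cursor, latest_start)
        schedule.append((start, start + period))
        cursor = start + period + gap   # keep sequences separated
    return schedule

random.seed(7)
print(random_contrast_schedule(max_capture_time=5.0))
```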

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Security & Cryptography (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention concerns a method of verifying if the video of a subject provided by a mobile device (10) is the video captured, comprising: capturing a video of said subject with said built-in camera (111, 121), with at least one contrast change sequence; providing a video of said subject with said mobile device (10); detecting at least the starting time (T'1; t1) and the end time (T'2; t2) of each of the contrast change sequences detected in the provided video; comparing the starting and end times (T'1, t1, T'2, t2) of the detected contrast change sequence of the provided video respectively to the starting and end times (T1, T2) of the sequence of said contrast control, taking into account the shutter lag value of the mobile device (SL); and establishing if the provided video corresponds to the captured video.

Description

A method and a system for determining if the video flow provided by a mobile device is the original one
Field of the invention
[0001] The present invention relates to a method and to a system for determining if the video flow (or a sequence of images) provided by a mobile device corresponds to the original video flow captured by the same mobile device. More precisely, the present invention concerns a method for preventing counterfeiting of an images sequence or video stream when a sequence of images or a video is captured by a built-in camera or a webcam on a mobile device like a smartphone, tablet or laptop, notably when performing video authentication or identification, including facial authentication or identification.
[0002] Nowadays, although these mobile devices and related digital services provide great value to end-users, there is a growing concern about security, and more particularly about identity theft through spoofing techniques that can be easily reproduced.
Description of related art
[0003] In the current art, there are mobile devices on the market where the original cameras can be disconnected from the mobile device and where a hacker can inject a fake video stream or images sequence into the mobile device, which video stream or images sequence can be fully controlled by the hacker. In the present text, "mobile device" also means any wireless or handheld device, including smartphones, tablets and the like.
[0004] In other situations, the spoofing case consists in making a video replay attack with a separate device such as a smartphone, tablet, PC, beamer or the like, which plays a video of the individual and is used to fool 2D or 3D facial authentication systems. This latter spoofing case is made easier when the mobile device of the person whose identity is to be authenticated by video or images sequence authentication has been stolen by the hacker, who will then probably find adequate videos or meaningful images of the owner of the stolen mobile device in the memory of this mobile device or in any other location accessible from the stolen mobile device. [0005] WO2015059559 describes an example of a system for performing a video authentication which is a 3D-based identity verification of individuals with mobile devices. Some methods are described to detect spoofing attack situations, but they require the comparison of video sequences or of 3D models, which needs significant computing resources and some time before giving an answer about possible spoofing.
[0006] "Face recognition on consumer devices: reflections on replay attacks" paper issued to Daniel F. Smith et al. in IEEE publication of April 2015 - volume 10, pages 736-745, discloses approaches to either analyse the eyes region by checking if a bright reflection occurred against the dark eyes or analyse the whole face by switching from black to white or using multiple colour changes to augment entropy. Notably, in an embodiment, the screen of the mobile device displays a colour image that is reflected by the face, this face reflection being captured by the camera. This way, when applying multiple colour changes by the mobile device, the corresponding multiple colour changes in the captured video create a watermark to the video that can be used to distinguish an original video from a non-original video. Such a method could nevertheless be fooled by using an already available video and applying the same multiple colour changes.
[0007] The objective of this invention is to address some or all of the shortcomings of the prior art, and to propose a means to verify safely and simply that the video flow or images sequence provided by a mobile device corresponds to the original video flow captured by the same mobile device.
[0008] The present invention proposes to define a method to detect and prevent such spoofing situations in an easy way. [0009] Also, the present invention aims to provide a solution to verify that the video flow or images sequence provided by a mobile device corresponds to the original video flow or images sequence captured by the same mobile device, which can be adopted at a large scale and which can be used with regular mobile devices without any need for specific hardware or specific features on the mobile device.
[0010] In the following text "video" or "video flow" also means "images sequence", namely a series of images or still pictures which together can be used to form a moving image. According to the present invention, an images sequence is considered to form a video when there is a minimum number of still pictures per unit of time of video, which is six frames per second (frames/s), preferably sixteen frames per second to achieve a comfortable illusion of a moving image. Of course, a higher number of still pictures per unit of time of video can also be used.
Brief summary of the invention
[0011] According to the invention, these aims are achieved by means of a method of verifying if the video of a subject provided by a mobile device is the video captured by the built-in camera of said mobile device, according to claim 1. [0012] In the context of the present invention, the subject of the video can be of different types, including the user's face, including any region or any combination of regions of the individual's face (notably the lips area, the nostrils area, the eyes area, ...), or any other individual's attribute (such as the face, left hand, right hand, left foot, right foot, left ear and/or right ear, including any portion of these attributes) or any object exposed to the built-in camera of the mobile device.
[0013] In the context of the present invention, the shutter lag magnitude, or shutter lag, is the difference in time between the moment there is a change in the contrast control (change in the light source) of a mobile device and the moment it impacts the image of the video which is captured by the same mobile device.
[0014] Therefore, as the contrast change sequence which is implemented by the contrast control of the mobile device can be different for each new captured video, this forms a signature or a tagging of the video by contrast changes in the captured video, said signature including the shutter lag value of the mobile device. When such a signature is not retrieved in the provided video, this means that the provided video is not the original one, namely not the captured video. [0015] Other features of the method according to the present invention are defined in sub-claims 2 to 23.
[0016] The invention also concerns a system for verifying if the video of a subject provided by a mobile device is the video captured by the built-in camera of said mobile device, comprising:
- a mobile device equipped with a built-in camera, a display screen, a wireless communication adapter and a verification mobile application,
wherein said mobile device is able to implement the verification method as described in the present text by means of said verification mobile application. Said verification mobile application is able to implement a method of verifying if the video of a subject provided by a mobile device is the video captured by the built-in camera of said mobile device, according to the present invention.
[0017] In another embodiment, said system further comprises
- an anti-spoofing transaction server that runs the algorithm used for the transaction, such as a financial transaction server, and
- a transaction verification server,
said anti-spoofing transaction server being able to detect spoofing situations by the following sub-steps: said anti-spoofing transaction server sends a verification request to said transaction verification server, said transaction verification server being able to inform the individual about said verification request by sending a message to said individual's mobile device, and said mobile device being able to implement the verification method as described in the present text by means of said verification mobile application.
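Purely as an illustration of this server interaction (the class names, message fields and callbacks below are assumptions, not part of the patent), the ordering of the sub-steps can be sketched as follows: the anti-spoofing transaction server issues a verification request, the transaction verification server notifies the individual's mobile device, and the device then runs the local verification method.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VerificationRequest:
    """Hypothetical payload sent by the anti-spoofing transaction server."""
    transaction_id: str
    user_id: str

class TransactionVerificationServer:
    """Forwards verification requests to the individual's mobile device."""
    def __init__(self, send_push: Callable[[str, VerificationRequest], None]):
        self.send_push = send_push  # assumed push-notification gateway

    def handle_request(self, request: VerificationRequest) -> None:
        # Inform the individual that a verification is required for this transaction.
        self.send_push(request.user_id, request)

class MobileVerificationApp:
    """Runs the contrast-control verification method on the device."""
    def on_push(self, request: VerificationRequest,
                run_local_verification: Callable[[], bool]) -> bool:
        # The callback stands in for the capture and contrast checks described later.
        return run_local_verification()

# Example wiring with stub callbacks.
app = MobileVerificationApp()
server = TransactionVerificationServer(
    send_push=lambda user, req: print(f"notify {user} about {req.transaction_id}"))
server.handle_request(VerificationRequest("tx-42", "alice"))
print("verification passed:", app.on_push(VerificationRequest("tx-42", "alice"), lambda: True))
```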
Brief Description of the Drawings
[0018] The invention will be better understood with the aid of the description of an embodiment given by way of example and illustrated by the figures, in which:
Figure 1 is a rear view of a mobile device,
Figure 2 is a front view of a mobile device,
Figure 3 is a schematic block diagram of the main video components in a mobile device, and
Figure 4 represents the timing diagram for contrast control changes (Figure 4a), for contrast changes in a captured video (Figure 4b) and for a provided video (Figure 4c), in a nominal scenario of the method according to the present invention.
Detailed Description of possible embodiments of the Invention
[0019] The present application refers to providing a method to certify the origin of a video captured by the built-in camera of a mobile device such as an iPhone (registered Trademark) or an Android (registered Trademark) mobile device or a laptop equipped with a webcam.
[0020] At the system level, the built-in rear camera 110 or front camera 111 captures a video 10 of a biometric attribute (for instance the face) of the individual, which video is sent to the Digital Signal Processor (DSP) 140 or main processor 150. Depending upon the model of the mobile device 100, the built-in rear camera 110 or front camera 111 may be connected to the DSP 140 or processor 150 via a wired connector 115, like a USB connector, and a connection 116 (wired connection or wireless connection). Such a wired connector 115 can easily be disconnected by a hacker from the mobile device, namely from the DSP 140 or processor 150. In that situation, the hacker can then easily connect an external video source, like a computer, to the DSP 140 or processor 150. Alternatively, a hacker can install a malware application on the user's mobile device to simulate or emulate a camera and disable the original built-in front and/or rear camera. In such a case, the video flow 10 received by the DSP 140 or processor 150 is not the original one and the hacker can completely control the content of the video. Another hacking situation consists in placing a second device (not shown) equipped with a screen (smartphone, tablet, PC, monitor, beamer projection, ...) in front of the built-in rear camera 110 or front camera 111 of the mobile device 100, and in replaying a recorded video with this second device to fool the application 151 implementing the method according to the invention.
[0021] To avoid counterfeiting of the video flow 10 or to detect spoofing situations, the present invention proposes a method which digitally watermarks the video. Such a watermarking creates contrast changes on the images of the captured video in a controlled way. Such a contrast change lies in a gradient or single bright colour change and/or a luminance change when capturing the video. In other words, such a contrast change consists in a controlled luminance change of the subject, notably of the face of an individual, during the step of the video capture. This luminance change intervenes during one or several new sequence(s) for each video capture so that it cannot easily be spoofed by a hacker.
[0022] For instance, this contrast change can be implemented either by turning the flash on/off, or by changing the contrast on the mobile screen 105, like switching the background colour from black to white during the video capture, or by displaying a white shape 108 on the mobile screen 105 that can optionally move along the horizontal and/or vertical axis. [0023] In a preferred embodiment, the method performs such contrast change detection (or illumination change detection) in a three-dimensional (3D) space and not in a two-dimensional (2D) space.
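As a purely illustrative aid (not part of the patent text), the following minimal sketch shows how an application could drive such a contrast change sequence: it toggles a light source according to a schedule of (start, end) offsets and records the control timestamps (T1/T2, T3/T4, ...) that are later compared with the reflected changes. The `set_light` callback stands in for the platform-specific flash or screen control, which is assumed and not shown.

```python
import time
from typing import Callable, List, Tuple

def apply_contrast_control(schedule: List[Tuple[float, float]],
                           set_light: Callable[[bool], None],
                           t0: float) -> List[Tuple[float, float]]:
    """Turn the light source on/off according to `schedule`.

    `schedule` holds (start_offset, end_offset) pairs in seconds relative to
    the capture start time `t0` (T1/T2, T3/T4, ... in the text).  Returns the
    control timestamps actually applied, to be compared later with the
    reflected contrast changes detected in the provided video.
    """
    applied = []
    for start, end in schedule:
        time.sleep(max(0.0, t0 + start - time.time()))
        set_light(True)            # contrast change starts (T1, T3, ...)
        t_on = time.time()
        time.sleep(max(0.0, t0 + end - time.time()))
        set_light(False)           # contrast change ends (T2, T4, ...)
        applied.append((t_on, time.time()))
    return applied

# Usage with a stub light source (prints instead of driving flash or screen).
t0 = time.time()
control = apply_contrast_control([(0.2, 0.5), (0.9, 1.5)],
                                 set_light=lambda on: print("light", on),
                                 t0=t0)
print("control timestamps:", control)
```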
[0024] In the preferred embodiment, when running a 3D facial authentication, referring to figures 2, 4a, 4b and 4c, the method implemented by the application 151 starts by capturing the video from the front camera 111 at T0 (initial point in time, see figure 4b). Then, a contrast change is applied by the mobile device 100 as light source during the capture of the video (either with the frontal flash 121 or with the screen 105) and therefore this contrast change is projected on the user's face as described before. This contrast change of the light source is applied at starting time T1 during a time period P1 finishing at end time T2 (time difference T2 - T1 = P1, see figure 4a), whereas the video is still captured after T2 (see figure 4b). Therefore, the contrast change of the light source between times T1 and T2 forms one contrast change sequence during the capture time of the video. This contrast change sequence of the light source brings a corresponding contrast change sequence on the subject, namely a higher contrast (luminance) at some location(s) (point(s) or portion(s)) of the surface of the subject during the time period P1 (between times T1 and T2). This higher contrast (luminance) corresponds to more light reflected by this (these) location(s) of the surface of the subject towards the built-in camera 110 or 111, resulting therefore in a higher contrast on (some part(s) of) the images of the video flow during the contrast change sequence (with a time offset equal to the shutter lag SL), namely between T'1 and T'2 (with T'2 - T'1 = P1).
[0025] During further analysis of the captured video 10, a contrast change is therefore detected at time T'1 (see figure 4b) when checking whether a reflected contrast change occurred in the concerned area(s) of the video flow provided and previously captured. In the example shown in Fig. 4b, there is a reflected contrast change between T'1 and T'2 of the provided video flow, which is the same video flow as the captured video flow. As in any photography system, the time difference between T'1 (start point of the reflected contrast change in the provided video flow) and T1 (start point of the controlled contrast change during the capture of the video flow) corresponds to the shutter lag SL (time difference T'1 - T1 = SL and time difference T'2 - T2 = SL, see Figure 4b), which is fixed and representative of the optical system of the camera. In the shown example, at time T5, the application 151 ends capturing the video and analyses whether the contrast changes applied at T1 and T3 by the mobile device (see figure 4a) correspond respectively to the reflected contrast changes detected at T'1 and T'3 in the provided video, which answer is positive if the provided video is the captured video (see figure 4b). [0026] When analysing the reflected contrast changes captured in the provided video, the system checks at least that the starting time T'1 (T'3) of this reflected contrast change of the provided video corresponds to the starting time T1 (T3) of the controlled contrast change sequence which is applied through contrast control, taking into account the shutter lag SL. Namely, the system checks that T'1 equals T1 + SL (T'3 equals T3 + SL).
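To recover the reflected sequences (T'1, T'2), (T'3, T'4), ... from the provided video in the first place, one simple approach, given here only as an illustrative sketch, is to track the mean luminance of the monitored region frame by frame and keep the intervals where it rises above its baseline. The per-frame luminance values, the frame timestamps and the threshold rule are assumptions for the example; the patent does not prescribe a particular detector.

```python
from typing import List, Tuple

def detect_contrast_sequences(luminance: List[float],
                              timestamps: List[float],
                              threshold: float) -> List[Tuple[float, float]]:
    """Return (start, end) times of intervals where the region luminance
    exceeds baseline + threshold, i.e. candidate reflected contrast changes."""
    baseline = sorted(luminance)[len(luminance) // 2]  # median as baseline
    sequences, start = [], None
    for lum, t in zip(luminance, timestamps):
        if lum > baseline + threshold and start is None:
            start = t                      # rising edge -> T'1, T'3, ...
        elif lum <= baseline + threshold and start is not None:
            sequences.append((start, t))   # falling edge -> T'2, T'4, ...
            start = None
    if start is not None:                  # sequence still open at video end
        sequences.append((start, timestamps[-1]))
    return sequences

# Toy example: 20 frames at 10 fps, luminance bump between 0.5 s and 0.9 s.
lum = [10] * 5 + [40] * 4 + [10] * 11
ts = [i / 10 for i in range(20)]
print(detect_contrast_sequences(lum, ts, threshold=15))  # [(0.5, 0.9)]
```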
[0027] Also, when analysing the reflected contrast changes, the system can check that the time difference between the starting time T'1 and the end time T'2 of this reflected contrast change of the provided video corresponds to the same time difference or time period P1 as for the contrast change which is applied through contrast control.
[0028] Fig. 4c shows a possible time diagram for the contrast change of a video provided by the mobile device 10 which is not the captured video but a video injected by a hacker. Since the hacker did not know precisely the time phasing of the contrast control created by the mobile device 10, the video provided by the hacker, and played by the mobile device to try to fool the video authentication, has a time phasing, shown in Fig. 4c, which is different from the time phasing of Fig. 4b for the captured video forming the original video.
[0029] In the example of Fig. 4c, there are also two sequences of contrast change contained in the video flow of the provided video (video replay). A first sequence starts at starting time t1, which is equal to starting time T'1 of the contrast change in the captured video, and ends at end time t2, which is not equal to (is later than) end time T'2 of the contrast change in the first sequence of contrast change in the captured video. Therefore, the time period p1 of the first contrast change sequence in the provided video is different from the time period P1 of the first contrast change sequence in the captured video (t2 - t1 = p1, different from T'2 - T'1 = P1). In the provided video, a second sequence starts at starting time t3, which is not equal to (is later than) starting time T'3 of the contrast change of the second sequence in the captured video, and ends at end time t4, which is not equal to (is later than) end time T'4 of the contrast change of the second sequence of the captured video. In this example, even if start time t3 and end time t4 of the provided video do not correspond to those of the captured video (respectively T'3 and T'4), the time period p2 of the second contrast change sequence in the provided video is (by coincidence) equal to the time period P2 of the second contrast change sequence in the captured video (t4 - t3 = p2, equal to T'4 - T'3 = P2).
[0030] Therefore, when the system checks whether the video provided by the mobile device 100 is the original one, it can establish that it is not the original one, namely that it is not the video captured by the mobile device under the contrast control of Fig. 4a, by any one or several of the following checking procedures.
[0031] As a possible checking procedure, the application 151 compares the starting time (T'1, t1) of the detected contrast change sequence of the provided video to the starting time (T1) of the sequence of said contrast control, taking into account the shutter lag magnitude (SL) of the mobile device 10. This check is done for each contrast change sequence. For the video provided as in Fig. 4c, since the starting time t1 of the first detected contrast change sequence is equal to the sum of the shutter lag magnitude SL and the starting time T1 of the first contrast change sequence (T'1 = T1 + SL = t1), this means the provided video can be the captured video. This operation is repeated for the second detected contrast change sequence: in that case, the starting time t3 of the second detected contrast change sequence is not equal to the sum of the shutter lag magnitude SL and the starting time T3 of the second contrast change sequence (T'3 = T3 + SL, different from t3), which means the provided video is not the original captured video.
[0032] In a preferred embodiment, if the video provided by said mobile device 100 contains temporary contrast changes, the method also detects the end time (T'2; t2) of each of the detected contrast change sequences. In that situation, preferably, the method according to the present invention further compares the end time (T'2, t2) of the detected contrast change sequence of the provided video to the end time (T2) of the sequence of said contrast control, taking into account the shutter lag value (SL) of the mobile device 100.
[0033] As another possible checking procedure, the application 151 compares the starting time (T'1, t1) and the end time (T'2, t2) of the detected contrast change sequence of the provided video respectively to the starting time (T1) and to the end time (T2) of the sequence of said contrast control, taking into account the shutter lag magnitude (SL) of the mobile device 10. This check is done for each contrast change sequence. For the video provided as in Fig. 4c, since the starting time t1 of the first detected contrast change sequence is equal to the sum of the shutter lag magnitude SL and the starting time T1 of the first contrast change (T'1 = T1 + SL = t1), this means the provided video can be the captured video. This comparison operation is repeated for the end time t2 of the first detected contrast change sequence by checking whether t2 is equal to or different from the sum of the shutter lag magnitude SL and the end time T2. Here t2 ≠ T'2 = T2 + SL, which means the provided video is not the original captured video. The same conclusion of the non-original character of the provided video arises when the comparison is done with the starting time t3 and also with the end time t4 of the second detected contrast change sequence (t3 is different from T3 + SL = T'3 and t4 is different from T4 + SL = T'4).
[0034] As another possible checking procedure, the application 151 compares the time difference (t2 - t1 or t4 - t3) between said end time (t2 or t4) and said starting time (t1 or t3) of the detected contrast change sequences of the provided video to the time period (P1, P2) of the contrast control during said contrast change sequences. This check is done for each contrast change sequence. For the video provided as in Fig. 4c, even if there is a positive reply when checking this feature for the second detected contrast change sequence (the values are equal, namely t4 - t3 = p2 = P2), since for the first detected contrast change sequence there is a negative reply (the values are different, namely t2 - t1 = p1 is different from P1), this means the provided video is not the captured video.
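Taken together, the checking procedures of paragraphs [0031] to [0034] amount to comparing, for each sequence, the detected start time with the controlled start time plus SL, the detected end time with the controlled end time plus SL, and the detected period with the controlled period. The sketch below only illustrates these comparisons; the tolerance parameter and the assumption that the sequence lists are ordered in time are illustrative choices, since the text only states that the shutter lag has an estimated variance.

```python
from typing import List, Tuple

def provided_video_is_original(control: List[Tuple[float, float]],
                               detected: List[Tuple[float, float]],
                               shutter_lag: float,
                               tolerance: float) -> bool:
    """Check detected (start, end) sequences against the contrast-control
    schedule, shifted by the shutter lag SL.  Any mismatch in start time,
    end time or period rejects the provided video."""
    if len(detected) != len(control):
        return False
    for (t_on, t_off), (d_on, d_off) in zip(control, detected):
        if abs(d_on - (t_on + shutter_lag)) > tolerance:      # T'1 = T1 + SL ?
            return False
        if abs(d_off - (t_off + shutter_lag)) > tolerance:    # T'2 = T2 + SL ?
            return False
        if abs((d_off - d_on) - (t_off - t_on)) > tolerance:  # period P1 ?
            return False
    return True

# Scenario of Fig. 4c: first detected period too long, second sequence late.
control = [(0.2, 0.5), (0.9, 1.5)]
sl = 0.05
genuine = [(0.25, 0.55), (0.95, 1.55)]
replay = [(0.25, 0.70), (1.10, 1.70)]
print(provided_video_is_original(control, genuine, sl, tolerance=0.02))  # True
print(provided_video_is_original(control, replay, sl, tolerance=0.02))   # False
```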
[0035] For controlling the contrast change on the images of the captured video, as previously mentioned, the invention proposes two preferred but non-limiting possibilities using the mobile device 100 as light source with a contrast change. In that situation, the temporary change of colour and/or luminance of the subject exposed to the built-in camera (111, 121) results from a contrast control which is created, at least partially or solely, by the mobile device itself. [0036] A first possibility is creating a contrast change by turning the flash on/off, preferably the front flash 121 of the mobile device 10. At starting time T1, the flash is turned on and at end time T2, the flash is turned off, creating thereby a different temporary light exposure of the subject, which generates a contrast reflection change, namely a luminance change, of at least some exposed part of the subject between T'1 and T'2. In other words, according to this first possibility, the mobile device 100 has a built-in flash (rear flash 120 or front flash 121), and said temporary change of contrast of the subject exposed to the built-in camera 111 or 121 of the contrast control is created by controlling the triggering time of said built-in flash of the mobile device 10. Preferably, said built-in flash is the frontal flash 121 of said mobile device 10.
[0037] There is a second possibility, which is also convenient for mobile devices without a flash, and in particular for those with no front flash. This consists in strongly changing the contrast, and possibly also the brightness, of the mobile screen 105 or of a region of the mobile screen 105 which faces the object whose video will be captured by the built-in camera of the mobile device (like a user's face, an individual attribute or any object exposed to the built-in camera). In that case, instead of using the flash of the mobile device, when ambient lighting conditions are applicable, the contrast change is implemented by changing the colour, and also optionally the brightness, on all the screen or on one region or on multiple regions of the screen.
[0038] For instance, this contrast change is created by switching the background colour of the mobile screen 105 from black (or another dark colour) to white (or another light colour) and back during the video capture. According to another preferred embodiment, the mobile application 151 can trigger a contrast change by displaying a white vertical bar 108 on the mobile screen 105 (see Figure 2). This vertical bar 108 can be replaced by any other contrasted shape displayed on the mobile device screen 105, for instance a circle. Also, preferably, this vertical bar 108 moves from the left to the right of the screen and/or inversely during the contrast control (contrast change sequence).
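The moving white bar 108 can be pictured as a sequence of screen images; the NumPy sketch below only illustrates the idea of sweeping a bright bar across an otherwise dark screen during the contrast change sequence. The screen size, bar width and frame count are arbitrary illustrative values, not values given by the patent.

```python
import numpy as np

def white_bar_frames(width=360, height=640, bar_width=40, n_frames=30):
    """Yield grayscale frames (uint8 arrays) of a white vertical bar moving
    from the left edge to the right edge of a dark screen."""
    for i in range(n_frames):
        frame = np.zeros((height, width), dtype=np.uint8)   # dark background
        x = int(i * (width - bar_width) / max(1, n_frames - 1))
        frame[:, x:x + bar_width] = 255                      # bright bar 108
        yield frame

# The frames would be displayed full-screen during the contrast change
# sequence; here we just check the bar position on the first and last frame.
frames = list(white_bar_frames())
print(frames[0][:, :5].max(), frames[-1][:, -5:].max())  # 255 255
```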
[0039] In a possible embodiment, said mobile device has a screen 105 and said temporary change of contrast of the subject exposed to the built-in camera 111 or 121 of the contrast control is created by changing the colour of said screen 105 or of a part of said screen of said mobile device and/or by changing the brightness of the screen 105 of said mobile device 100. In a preferred embodiment, this contrast change is created by temporarily changing the colour (to a lighter colour) and increasing, namely raising, the brightness of the screen 105 of said mobile device. Such a change of colour can be used with a uniform and single colour on the screen or with a non-uniform colour on the screen, such as a gradual colour. Also, such a change of colour is preferably made through a direct change of colour of the screen, from a first colour to a second colour (for instance a direct change from black to white), but possibly also with a progressive change of the colour of the screen (for instance a progressive change of colour from black to white, through intermediate gray colours progressively lighter and lighter). In another possible embodiment, for controlling the contrast change on the images of the captured video, the invention also proposes to use a change of the parameters of a light source which is external to the mobile device 10. In that situation, the change of contrast and luminance of the subject exposed to the built-in camera 111, 121 is created by the contrast control resulting at least partially or totally from a light source which is external to the mobile device 10, this external light source projecting light on the subject of the video during the video capture, such as individual attributes (face, hands, ...), with a change, namely an increase, in light intensity during the contrast change sequence. For instance, the external light source is a class 1 laser beam.
[0040] Therefore, the tagging of the video captured by the mobile device by contrast changes can be implemented by several contrast means, depending on the technical features of the mobile device and depending on the ambient lighting conditions. [0041] If the contrast changes detected at T'1 and T'3 match respectively T1 + SL and T3 + SL, it means that the video stream handled by the DSP 140 or processor 150 is normally the original video captured by the built-in mobile camera 111 or 121. For more restrictive conditions to check the original character of the video captured by the built-in mobile camera 111 or 121, the method can further check whether the contrast changes detected at T'1 and T'3 match respectively the periods P1 and P2, which means that the video stream handled by the DSP 140 or processor 150 is the original video captured by the built-in mobile camera 111 or 121. For even more restrictive conditions to check the original character of the video captured by the built-in mobile camera 111 or 121, the method can further check whether the reflection areas detected are consistent with the expected positions and/or whether the shutter lag SL is consistent with the photographic or user mobile photographic shutter lag magnitude. Otherwise, it means either that the video captured is not the real one, which is a case of spoofing attack by video injection/control, or that the camera captured a video stream which is not the original one, which is a case of spoofing attack by video replay. [0042] Such a shutter lag magnitude range SL1 - SL2 (in seconds, with SL2 > SL1) can be established during a test phase or an enrolment phase during which this shutter lag of the mobile device is measured several times. All the measured shutter lags give a range SL1, SL2 and also a mean value used as shutter lag SL for the calculations and checking, which thereby has an estimated variance (about 50 to 200 milliseconds).
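The enrolment measurement of paragraph [0042] can be summarised as: measure the lag between each controlled change and its reflection several times, then keep the range [SL1, SL2] and the mean value used as SL. The sketch below is only an illustration; it assumes the individual lag measurements are already available as numbers in seconds, and the plausibility check mirrors the SLM/SLV test mentioned in the earlier bullet list.

```python
from statistics import mean, pstdev
from typing import List, Tuple

def enrol_shutter_lag(measurements: List[float]) -> Tuple[float, float, float, float]:
    """Return (SL1, SL2, mean SL, standard deviation) from repeated
    shutter-lag measurements taken during the test/enrolment phase."""
    sl1, sl2 = min(measurements), max(measurements)
    return sl1, sl2, mean(measurements), pstdev(measurements)

def lag_is_plausible(observed_lag: float, sl_mean: float, sl_spread: float) -> bool:
    """Flag a detected contrast change whose lag exceeds the enrolled
    mean lag plus the allowed spread (a potential hacking situation)."""
    return observed_lag <= sl_mean + sl_spread

# Example: five enrolment measurements between 60 and 120 ms.
sl1, sl2, sl, sd = enrol_shutter_lag([0.06, 0.08, 0.07, 0.12, 0.09])
print(round(sl1, 3), round(sl2, 3), round(sl, 3), round(sd, 3))
print(lag_is_plausible(0.10, sl, sl2 - sl1))   # True
print(lag_is_plausible(0.30, sl, sl2 - sl1))   # False
```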
[0043] In another embodiment, in case of difficulty in detecting a reflected contrast change, the previously described method can be repeated one more time (using two contrast change sequences during the capture of the video) or multiple further times (using more than two contrast change sequences during the capture of the video). The second period P2 of the second contrast change sequence can be extended to significantly increase the chance of detecting such a contrast change, as shown in Figure 4 with starting time T3 and end time T4, the time period P2 (T4 - T3 = P2), during which the contrast change is controlled as previously described, being longer than the time period P1. Since the built-in camera is unchanged, the shutter lag SL of the second contrast change sequence (namely the time difference T'3 - T3) is the same as for the first contrast change sequence (namely T'1 - T1, i.e. SL).

[0044] In another embodiment of the method according to the invention, contrast changes are analysed and shutter lag magnitudes are calculated in real time, which means that such analysis starts without waiting for the end of the video capture at T5.
[0045] In another embodiment of the method according to the invention, the application 151 can also detect the formation of a halo 20 generated by a reflective surface (see Figure 2). In that situation, the method according to the invention further analyses the images of the video flow of the provided video and detects the existence of any halo that would indicate a probable video replay fraud situation. Such a halo can be a circular halo. For example, if the hacker uses an external smartphone when applying the contrast change at T1, a light halo 20 will be captured at T'1 by the front camera 111 or rear camera 110, and can be detected by checking for a characteristic white gradient contrast in a circular shape. In case of difficulty in detecting the reflected contrast change because the hacker is using an absorbing or matt material, like the surface of a wall when using a beamer, the user may be prompted to turn the mobile device 100 around in order to use the rear flash 120 while enabling the rear camera 110 to capture the video 10.
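One possible way to search a frame for such a circular halo is sketched below, using OpenCV's Hough circle transform followed by a brightness check on the circle interior; the detector and its thresholds are illustrative assumptions, not the detection algorithm specified by the patent.

```python
import cv2
import numpy as np

def detect_light_halo(frame_bgr, min_radius=20, max_radius=200):
    """Look for a circular bright halo (e.g. the reflection of a replay
    screen) in a single video frame. Returns the bright circles found,
    as (x, y, radius) tuples, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                     # suppress sensor noise
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
        param1=100, param2=40,
        minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return None
    # Keep only circles whose interior is markedly brighter than the frame,
    # which is what a reflected light halo looks like.
    bright = []
    mean_level = gray.mean()
    for x, y, r in np.round(circles[0]).astype(int):
        mask = np.zeros_like(gray)
        cv2.circle(mask, (int(x), int(y)), int(r), 255, -1)
        if gray[mask == 255].mean() > mean_level + 40:
            bright.append((int(x), int(y), int(r)))
    return bright or None
```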
[0046] In a preferred embodiment of the method according to the invention, the individual is asked, before or during video capture, to move his head in front of the mobile device or to pan the mobile device around his face. Such a relative movement between the face (or, more generally, the individual's attribute) and the mobile device preferably has six degrees of freedom, notably as described in patent application WO 2015/059559.
During that recording phase (video capture), the facial identification or authentication application 151 applies at least one contrast change sequence as described before, preferably by applying gradient or single bright colour and/or luminance changes on the mobile device 100 (for instance on screen 105 or flash 120/121). Preferably, this contrast change sequence is adapted to the head pose estimated in real time. Due to the short distance between the face and the camera of the mobile device (built-in front camera 111 or built-in rear camera 110) and the fact that built-in consumer-grade front mobile cameras offer at least VGA resolution (640x480), the applicant observed that capturing a reflected area of at least 100 pixels in the lips region can be performed consistently.

[0047] In another embodiment of the method according to the invention, the contrast change sequence applied during the video capture is compared to the contrast change sequence detected in the provided video by analysing the reflection primarily in the lips region and optionally in the eyes region and/or nostrils region. This means that the provided video is first analysed to select the portion(s) of the image corresponding to the lips region, and optionally the eyes region and/or nostrils region, before making the above-mentioned comparison. Preferably, this analysis of the provided video also takes into account the head pose in the image to determine exactly which area of the face, or even which area of the lips, must be selected, and the corresponding portion of the images is used to make the above comparison (adaptation to the head pose estimated in real time). The lip region is a colour-consistent region, namely a face region with a magenta colour for most individuals, where reflection changes can therefore be easily detected.
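A minimal sketch of the reflection analysis in the lips region is given below: it tracks the mean brightness of the lip area over the frames and reports the first sharp rise as the detected starting time T'1. The lip bounding boxes are assumed to come from any face-landmark detector (a hypothetical helper, not specified here), and the jump threshold is an illustrative assumption.

```python
import numpy as np

def lip_brightness_series(frames, lip_boxes):
    """Mean brightness of the lip region, frame by frame.

    frames: list of greyscale frames (2-D numpy arrays);
    lip_boxes: matching list of (x, y, w, h) lip bounding boxes.
    """
    series = []
    for frame, (x, y, w, h) in zip(frames, lip_boxes):
        series.append(float(frame[y:y + h, x:x + w].mean()))
    return np.asarray(series)

def detect_reflection_start(series, frame_rate, jump=15.0):
    """Time (in seconds) of the first frame-to-frame brightness rise in the
    lip region exceeding `jump`, i.e. the detected starting time T'1,
    or None if no such rise is found."""
    diffs = np.diff(series)
    for idx, rise in enumerate(diffs):
        if rise > jump:
            return idx / frame_rate
    return None
```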
[0048] In another embodiment of the method according to the invention, the contrast change sequence applied during the video capture is compared to the contrast change sequence detected in the provided video also by analysing the reflection changes in the eyes (comprising pupil, iris and sclera) and/or the reflection changes in the nostrils region. The eyes and nostrils regions of the face are, like the lips, colour-consistent regions where reflection changes can be easily detected. This approach, which further increases the entropy of the method, can generally be extended to detecting dark areas on the face which are poorly illuminated and where any single bright colour or luminance change can be easily detected.
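As a sketch of this extension to poorly illuminated areas, the code below builds a mask of the darkest pixels of a face frame and measures how much their luminance rises during the contrast change; the percentile and the interpretation of the rise are illustrative assumptions.

```python
import numpy as np

def dark_region_mask(gray_frame, percentile=20):
    """Boolean mask of poorly illuminated pixels (e.g. nostrils or shadowed
    areas) where a single bright colour or luminance change stands out."""
    threshold = np.percentile(gray_frame, percentile)
    return gray_frame < threshold

def dark_region_response(frame_before, frame_during):
    """Mean luminance rise, during the contrast change, restricted to the
    areas that were dark beforehand; a clear rise supports a genuine
    reflection on the subject rather than a replayed video."""
    mask = dark_region_mask(frame_before)
    return float(frame_during[mask].mean()) - float(frame_before[mask].mean())
```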
[0049] In a possible embodiment of the method according to the invention, the application 151 can further reject the provided video as not being the one captured by the built-in mobile camera in case of un-matching or inconsistent contrast changes, based on the analysis of any one or several of the following: the reflected surfaces and positions of the subject (the face or, more generally, the individual's attribute) with respect to the mobile device's camera, and the starting times, end times and/or periods between the detected contrast change sequence(s) and the contrast control sequences used during the capture of the video.
[0050] As the application 151 can randomly change the contrast change starting times (T1, T3, ...), the period lengths (P1, P2, ...) and the number of occurrences of contrast changes, for example using three contrast changes instead of two, there is a virtually unlimited number of combinations of contrast change sequences, which can hardly be reproduced by a hacking system. For instance, the method according to the invention further comprises, before the capture of the video, randomly determining the number of contrast change sequences as well as the starting times and periods of each of said contrast change sequences, taking into account a maximum time allowed to capture the video (illustrated in the sketch below).

[0051] Also, in a possible embodiment, the method according to the invention further comprises, during the video capture, determining the contrast change sequences according to the detection of specific subject poses, like the orientation of the face with respect to the mobile device's camera.

[0052] Also, in another possible embodiment, the method according to the invention further comprises, during the video capture, a contrast change activation in real time when shadow areas are detected on the video. This provision can be used to change the parameters of the contrast control so as to optimize the effect of the contrast change control, namely to obtain an enhanced contrast change. For instance, a face partially turned away from the camera, namely presented three-quarters on to the camera, allows a maximum reflection of the light on the side of the nose.
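An illustrative sketch of the random planner mentioned in paragraph [0050] is given below; the bounds on the number of sequences, periods and gaps are assumptions chosen for readability, not values taken from the patent.

```python
import random

def plan_contrast_sequences(max_capture_time, min_period=0.2, max_period=0.8,
                            min_sequences=2, max_sequences=4):
    """Randomly plan the contrast change sequences before the capture:
    number of sequences, starting times and periods, within the maximum
    time allowed to capture the video.

    Returns an ordered list of non-overlapping (start_time, period) tuples.
    """
    count = random.randint(min_sequences, max_sequences)
    sequences = []
    t = random.uniform(0.5, 1.0)                  # leave a short lead-in after T0
    for _ in range(count):
        period = random.uniform(min_period, max_period)
        if t + period > max_capture_time:
            break                                 # respect the capture time budget
        sequences.append((round(t, 2), round(period, 2)))
        t += period + random.uniform(0.3, 1.0)    # random gap before the next start
    return sequences
```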
[0053] In another embodiment of the method according to the invention, a default mean shutter lag value SLM and a default shutter lag range (SL1, SL2) are determined for the camera of the mobile device, and the application 151 detects and compares said shutter lag magnitude (SL) of the provided video to the mean shutter lag value SLM of the mobile device; optionally, the application 151 also detects and compares the variance SLV of the shutter lag of the provided video to said default shutter lag range (to check whether the variation or variance of the shutter lag of the provided video stays within the default shutter lag range [SL1, SL2], with SL1 = SLM - SLV and SL2 = SLM + SLV). In that situation, in order to strengthen the reliability of the method and system and make any video counterfeiting situation almost impossible, the application 151 can be configured with default, write-once mean shutter lag value SLM and variance value SLV, which can be determined during the user enrolment phase by capturing one or more videos of the individual attribute with the user's mobile. In that case, any contrast change detected at a time T'1 greater than the sum of the starting time T1, the mean shutter lag SLM and the maximum variance value SLV will be interpreted as a potentially serious hacking situation.

[0054] Depending on the situations encountered, the application 151 can pre-check whether the video capture conditions are valid or applicable. Outdoor conditions with high luminance are not good conditions for detecting contrast changes, as any reflected contrast changes will hardly be detectable between T0 and T5. In such a case, in a preferred embodiment, the application 151 can inform the user that the video capturing conditions are not valid and prompt the user to move indoors.
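A minimal sketch of the shutter-lag consistency check of paragraph [0053] is shown below; it simply verifies that the observed lag between the controlled start T1 and the detected start T'1 falls within the enrolled range [SL1, SL2] = [SLM - SLV, SLM + SLV].

```python
def shutter_lag_is_consistent(detected_start, controlled_start,
                              mean_lag, lag_variance):
    """Check the observed shutter lag (T'1 - T1) against the enrolled range
    [SL1, SL2] = [SLM - SLV, SLM + SLV] of the mobile device.

    A contrast change detected later than T1 + SLM + SLV is interpreted as
    a potentially serious hacking situation.
    """
    observed_lag = detected_start - controlled_start
    sl1, sl2 = mean_lag - lag_variance, mean_lag + lag_variance
    return sl1 <= observed_lag <= sl2
```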
Reference numbers used in figures
10 Video of the individual (face, left hand ...)
20 Light halo
100 Mobile device
105 Mobile device screen
108 Contrasted shape displayed on mobile device screen
110 Rear camera
111 Front camera
115 Rear/front camera connector
116 Connection between rear/front camera to DSP or processor
120 Rear flash
121 Front flash
140 Digital Signal Processor (for image processing)
150 Mobile processor or micro-processor
151 Application that runs the photo/video anti-spoofing algorithm
T0 Initial point in time of the video
T1 Starting time of the contrast change by the contrast control (first sequence)
T2 End time of the contrast change by the contrast control (first sequence)
P1 Time period of the first contrast change sequence
T3 Starting time of the contrast change by the contrast control (second sequence)
T4 End time of the contrast change by the contrast control (second sequence)
P2 Time period of the second contrast change sequence
T5 End time of video capture
T'1 Starting time of the contrast change in the captured video (first sequence)
T'2 End time of the contrast change in the captured video (first sequence)
T'3 Starting time of the contrast change in the captured video (second sequence)
T'4 End time of the contrast change in the captured video (second sequence)
SL Shutter lag in the captured video
t1 Starting time of the contrast change in the provided video (first sequence)
t2 End time of the contrast change in the provided video (first sequence)
p1 Time period of the first contrast change sequence in the provided video
t3 Starting time of the contrast change in the provided video (second sequence)
t4 End time of the contrast change in the provided video (second sequence)
p2 Time period of the second contrast change sequence in the provided video

Claims

1. A method of verifying if the video (10) of a subject provided by a mobile device (100) is the video captured by the built-in camera (111, 121) of said mobile device (100), comprising:
- providing a mobile device (100) comprising a built-in camera (111, 121) with a shutter lag value (SL),
- capturing a video of said subject with said built-in camera (111, 121), with at least one contrast change sequence during which the colour and/or luminance of the subject exposed to the built-in camera (111, 121) is temporarily changed by contrast control between a starting time (T1) and an end time (T2) forming a time period (P1), said contrast change sequence of the captured video defining at least a starting time (T'1) and an end time (T'2),
- providing a video of said subject with said mobile device (100),
- if the video provided by said mobile device (100) contains temporary contrast changes, detecting at least the starting time (T'1; t1) of each of the detected contrast change sequences,
- comparing the starting time (T'1, t1) of the detected contrast change sequence of the provided video to the starting time (T1) of the sequence of said contrast control, taking into account the shutter lag value (SL) of the mobile device (100), and
establishing if the video provided by said mobile device (100) corresponds to the video captured by the built-in camera (111, 121) of said mobile device (100).
2. The method of claim 1, wherein if the video provided by said mobile device (100) contains temporary contrast changes, detecting further also the end time (T'2; t2) of each of the detected contrast change sequences.
3. The method of claim 2, further comprising
- comparing further the end time (T'2, t2) of the detected contrast change sequence of the provided video to the end time (T2) of the sequence of said contrast control, taking into account the shutter lag value (SL) of the mobile device (100).
4. The method of claim 2, further comprising:
- comparing the time difference (T'2 - T'1, t2 - t1) between said end time (T'2, t2) and said starting time (T'1, t1) of the detected contrast change sequences of the provided video to the time period (P1) of the contrast control during said contrast change sequences.
5. The method of any of claims 1 to 4, wherein at least two contrast change sequences are used during said capture of the video of said subject with said built-in camera (111, 121).
6. The method of any of claims 1 to 5, further comprising:
- rejecting the provided video as being not the one captured by the built-in mobile camera in case of un-matching or inconsistent contrast changes, based on analysis of any of or several of the following: the reflected surfaces positions, starting times, end times and/or periods between the detected contrast change sequence(s) and the contrast control sequences.
7. The method of any of claims 1 to 6, further comprising
- determining for the camera of the mobile device a mean shutter lag SLM value and a shutter lag variance SLV, defining together a default shutter lag range ([SL1, SL2]), and
- checking for the provided video that the shutter lag detected stays within said default shutter lag range.
8. The method of any of claims 1 to 7, further comprising, before the capture of the video, determining randomly the number of contrast change sequences, starting times and periods of each of said contrast change sequences, taking into account a maximum time allowed to capture the video.
9. The method of any of claims 1 to 8, further comprising, during the video capture, determining the contrast change sequences according to the detection of specific subject poses.
10. The method of any of claims 1 to 9, further comprising, during the video capture, a contrast change activation in real-time when shadow areas are detected on the video.
11. The method of any of claims 1 to 10, wherein the temporary change of colour and/or luminance of the subject exposed to the built-in camera (111, 121) of the contrast control is created by the mobile device (100).
12. The method of claim 11, wherein the mobile device has a built-in flash, and wherein said temporary change of colour and/or luminance of the subject exposed to the built-in camera (111, 121) of the contrast control is created by controlling the triggering time of said built-in flash of the mobile device (100).
13. The method of claim 12, wherein said built-in flash is a frontal flash (121) of said mobile device (100).
14. The method of claim 11, wherein said mobile device has a screen and wherein said temporary change of contrast of the subject exposed to the built-in camera (111, 121) of the contrast control is created by changing the colour of said screen or a part of said screen of said mobile device and/or by changing the brightness of the screen (105) of said mobile device (100).
15. The method of any of claims 1 to 10, wherein the temporary change of luminance of the subject exposed to the built-in camera (111, 121) of the contrast control is at least created by a light source which is external to the mobile device (100).
16. The method of claim 15, wherein said external light source is a class 1 laser beam.
17. The method of any of claims 1 to 16, further comprising:
- analysing the images of the video flow of the provided video and detecting the existence of any halo, and
- identifying a probable video replay fraud situation when a halo is detected.
18. The method of any of claims 1 to 17, wherein said mobile device (100) is a smartphone.
19. The method of any of claims 1 to 18, wherein the subject is an individual's face.
20. The method of claim 19, wherein the detection of the contrast change sequences is implemented on the images of the captured video at least in the lips region.
21. The method of claim 19 or 20, wherein the detection of the contrast change sequences is implemented on the images of the captured video at least in the eyes region.
22. The method of claim 19 or 20, wherein the detection of the contrast change sequences is implemented on the images of the captured video at least in the nostrils region.
23. The method of any of claims 1 to 22, wherein the detection of the contrast change sequences is implemented on the images of the captured video in a three dimensional (3D) space.
24. A system for verifying if the video of a subject provided by a mobile device (100) is the video captured by the built-in camera (111, 121) of said mobile device (100), comprising:
- a mobile device (100) equipped with a built-in camera (111, 121) with a shutter lag magnitude (SL), a display screen (105), a wireless communication adapter and a verification mobile application (151), wherein said mobile device (100) is able to implement the verification method of any of claims 1 to 18 by means of said verification mobile application (151).
25. A system according to the preceding claim, further comprising
- an anti-spoofing transaction server, and
- a transaction verification server,
said anti-spoofing transaction server being able to send a verification request to said verification server, said verification server being able to inform the individual about said verification request for the transaction verification server by means of the verification server sending a message to said individual's mobile device (100), and said mobile device (100) being able to implement the verification method of any of claims 1 to 18 by means of said verification mobile application (151).
PCT/IB2016/057477 2015-12-10 2016-12-09 A method and a system for determining if the video flow provided by a mobile device is the original one WO2017098457A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP16810078.2A EP3387637A1 (en) 2015-12-10 2016-12-09 A method and a system for determining if the video flow provided by a mobile device is the original one

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CH01798/15 2015-12-10
CH17982015 2015-12-10

Publications (1)

Publication Number Publication Date
WO2017098457A1 true WO2017098457A1 (en) 2017-06-15

Family

ID=57543111

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2016/057477 WO2017098457A1 (en) 2015-12-10 2016-12-09 A method and a system for determining if the video flow provided by a mobile device is the original one

Country Status (2)

Country Link
EP (1) EP3387637A1 (en)
WO (1) WO2017098457A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850211A (en) * 2021-09-29 2021-12-28 支付宝(杭州)信息技术有限公司 Method and device for detecting injected video attack
FR3133245A1 (en) * 2022-03-03 2023-09-08 Commissariat à l'Energie Atomique et aux Energies Alternatives Secure image capture system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020172419A1 (en) * 2001-05-15 2002-11-21 Qian Lin Image enhancement using face detection
US20090207301A1 (en) * 2008-02-14 2009-08-20 Sony Ericsson Mobile Communications Ab Method of capturing an image with a mobile device and mobile device
US8542879B1 (en) * 2012-06-26 2013-09-24 Google Inc. Facial recognition
WO2015059559A1 (en) 2013-10-25 2015-04-30 Onevisage Llc A method and a system for performing 3d-based identity verification of individuals with mobile devices

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DANIEL F. SMITH ET AL.: "Face recognition on consumer devices: reflections on replay attacks", IEEE PUBLICATION, vol. 10, April 2015 (2015-04-01), pages 736 - 745, XP011575421, DOI: doi:10.1109/TIFS.2015.2398819
ESA HOHTOLA ET AL: "ACTA UNIVERSITATIS OULUENSIS - SOFTWARE-BASED COUNTERMEASURES TO 2D FACIAL SPOOFING ATTACKS", 18 June 2015 (2015-06-18), pages 57, XP055280495, Retrieved from the Internet <URL:http://jultika.oulu.fi/files/isbn9789526208732.pdf> *
RAGHAVENDRA R ET AL: "Presentation Attack Detection for Face Recognition Using Light Field Camera", IEEE TRANSACTIONS ON IMAGE PROCESSING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 24, no. 3, 1 March 2015 (2015-03-01), pages 1060 - 1075, XP011573100, ISSN: 1057-7149, [retrieved on 20150210], DOI: 10.1109/TIP.2015.2395951 *
SMITH DANIEL F ET AL: "Face Recognition on Consumer Devices: Reflections on Replay Attacks", IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, IEEE, PISCATAWAY, NJ, US, vol. 10, no. 4, 1 April 2015 (2015-04-01), pages 736 - 745, XP011575421, ISSN: 1556-6013, [retrieved on 20150312], DOI: 10.1109/TIFS.2015.2398819 *
WEN DI ET AL: "Face Spoof Detection With Image Distortion Analysis", IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, IEEE, PISCATAWAY, NJ, US, vol. 10, no. 4, 1 April 2015 (2015-04-01), pages 746 - 761, XP011575418, ISSN: 1556-6013, [retrieved on 20150312], DOI: 10.1109/TIFS.2015.2400395 *


Also Published As

Publication number Publication date
EP3387637A1 (en) 2018-10-17

Similar Documents

Publication Publication Date Title
US10133943B2 (en) Online pseudonym verification and identity validation
CA2960397C (en) Systems and methods for liveness analysis
CN108764052A (en) Image processing method, device, computer readable storage medium and electronic equipment
JP2020534608A5 (en)
CN108805024A (en) Image processing method, device, computer readable storage medium and electronic equipment
JP2015503866A (en) Device and method for user authentication and user existence verification based on Turing test
KR20190040962A (en) Detecting spoofing attacks during live image capture
CN111225157B (en) Focus tracking method and related equipment
US10346682B2 (en) Method of authenticating documents by means of a mobile telecommunications terminal
KR101444538B1 (en) 3d face recognition system and method for face recognition of thterof
WO2017000494A1 (en) Iris identification method, iris identification system, and terminal
GB2501362A (en) Authentication of an online user using controllable illumination
WO2019163066A1 (en) Impersonation detection device, impersonation detection method, and computer-readable storage medium
JP2010045501A (en) Image monitoring device
WO2017098457A1 (en) A method and a system for determining if the video flow provided by a mobile device is the original one
US9500939B2 (en) Safety feature for projection subsystem using laser technology
US11216680B2 (en) Spoof detection via 3D reconstruction
CN114387674A (en) Living body detection method, living body detection system, living body detection apparatus, storage medium, and program product
CN106851403B (en) Display device for preventing pirate playing picture and content safe playing method
CN110874906A (en) Method and device for starting defense deploying function
TWI729679B (en) Authentication system, authentication device, and authentication method
US12020512B2 (en) Spoof detection using eye boundary analysis
WO2021109458A1 (en) Object recognition method and apparatus, electronic device and readable storage medium
US20230097348A1 (en) Spoof detection by correlating images captured using front and back cameras of a mobile device
CN112906440A (en) Anti-cracking method for living body identification

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16810078

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016810078

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2016810078

Country of ref document: EP

Effective date: 20180710