WO2016139655A1 - Method and system for preventing uploading of faked photos - Google Patents

Method and system for preventing uploading of faked photos Download PDF

Info

Publication number
WO2016139655A1
Authority
WO
WIPO (PCT)
Prior art keywords
challenge
user
photo
video
photos
Prior art date
2015-03-01
Application number
PCT/IL2016/050198
Other languages
French (fr)
Inventor
Benedek NADAV
Original Assignee
I Am Real Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2015-03-01
Filing date
2016-02-21
Publication date
2016-09-09
Application filed by I Am Real Ltd. filed Critical I Am Real Ltd.
Publication of WO2016139655A1 publication Critical patent/WO2016139655A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A verification method for the validation of a specific person for a plurality of usages. The method uses face recognition through video, speech recognition and lip sync to validate a specific user against his own photograph. The system creates a random phrase for the user to speak in front of his camera. The system then corroborates the user's supplied photograph against his face in the video, and confirms that it is indeed the user on camera by recognizing the supplied random phrase and verifying that the lip movements correspond to the phrase being spoken.

Description

METHOD AND SYSTEM FOR PREVENTING UPLOADING OF FAKED PHOTOS
FIELD OF THE INVENTION
[0001] The present invention relates to user identity verification and more particularly to verification of identity for picture uploading.
BACKGROUND OF THE INVENTION
[0002] The internet is a wonderful place to create new identities for oneself. However, many services provided through the internet require some safety when exposing clients to other people who may not be who they claim to be. Verifying the identity of the person with whom you are talking, or with whom you conduct business transactions, is of the utmost importance.
[0003] Dating websites, for one, work hard to protect their user base against people who invent profiles, post photos of other people, and harass other clients. These sites are mostly unsuccessful in their attempts. The problem does not end with dating websites; user verification can be important to any site that offers communication between two or more individuals, for example social networks, or sites that do not want bots (software that automatically writes human-like comments, often serving as advertising) to post comments.
[0004] To date, most websites use only one form of validation - an image with an alphanumeric code embedded in it (CAPTCHA); this is a code that computers cannot yet read, so a human operator is required to enter it manually. This form of validation might be enough, for now, to screen actual people from bots. However, to take the example of dating sites, more is needed to prove that the person who registered to the site is who he (or she) claims to be.
[0005] With the sharing economy booming throughout the world, and especially in the United States, trust has become a major issue. These peer-to-peer financial systems require a high level of trust between the service provider and the customer. For example, Uber service providers (essentially cab drivers) put up a photograph, but this photograph is not authenticated. Uber service providers can be asked to verify their photo online with a simple procedure using the system described in this patent. It is also possible for a customer to challenge an Uber service provider just before the pickup, preventing the danger of a driver using another person's account. The same system can be used for other sharing-economy services, such as Airbnb and peer-to-peer loans, to reduce risk. The system can also help reduce credit card fraud by challenging a user to verify his photograph with his own video image.
[0006] Another very important aspect of this particular system is its ability to protect children from online harassment. Pedophiles are known to roam cyberspace and harass children, often disguising themselves as children. This system can easily prevent such cases with a simple challenge, either at the initial signup or as an individual challenge from one user to another.
[0007] Many patents have been issued over the years for identity verification and validation; these often use only one form of validation, for example voice recognition, image recognition, or fingerprints. Prerecorded verification characteristics are saved as a reference for a new verification request. Multimodal techniques are also used when high-security verification levels are required.
[0008] US patent publication 2012/0281885 by Syrdal et al., "System and Method for Dynamic Facial Features for Speaker Recognition", describes a user verification system that uses both video and audio to validate a specific user. However, this publication is aimed at security purposes, where a person is prerecorded on video speaking a specific phrase. The described system then compares various body movements and the voice signature to the prerecorded video of the same person for verification.
[0009] US patent publication 2012/0284029 by Wang et al., "Photo Realistic Synthesis of Image Sequences with Lip Movements Synchronized with Speech", describes a verification technique where an individual reads a known script which is stored as a reference, from which specific features are extracted. For verification, the person has to read the same sentence in front of a camera and a microphone, and the obtained information is compared to the stored reference. Among other features, the system compares the lip movements.
[0010] US patent publication 2014/0279519 by Mates et al., "Method and System for Obtaining and Using Identification Information", describes a method for identifying a user by comparing a photo included in an identification document, such as a driving license, which is presented in real time in front of a web camera, to a picture of the person taken at the same time with the web camera.
[0011] US patent publication No. 2012/0140993 by Bruso et al., "Secure Biometric Authentication From an Insecure Device", discloses a picture verification method where the user, presenting a real-time picture, is asked to input another picture of himself while making a simple gesture (e.g. closing the eyes, or providing a fingerprint). It uses static identification features of the person. If one provides fingerprints, they can be verified against previously stored data.
[0012] Patent No. WO2011062339 describes a method for user authentication for a video communication system. The described system receives an image containing several people, then validates a user through video by comparing his face to one of the faces in that image. While this is an interesting take on image identification, it is still rather easy to fool using common Photoshop tools. It does not use speech recognition, and it requires a user to provide a photograph of himself with other people around him. This is not usually the case in dating sites, for example, where a picture posted on the site must be of the user's face alone.
[0013] US patent publication 2013/0061304 by Bruso et al., "Pre-Configured Challenge Actions for Authentication of Data or Devices", discloses a method for user verification by prompting an individual to perform a challenge known only to the individual. The expected response to the required challenge has to be pre-stored in the computer for comparison to the challenge executed in real time.
[0014] US patent publication 2013/0061305 by Bruso et al., "Random Challenge Action for Authentication of Data or Devices", discloses a method for user verification by prompting an individual to perform a randomly selected challenge after passing a first level of verification, such as a password. The random challenge presented to the user is to move his face relative to the camera: up or down, left or right, or in a circular motion. This authentication method can easily be outwitted by holding a static photo in front of the camera and moving it according to the given challenge instruction.
SUMMARY OF THE INVENTION
[0015] The disclosed invention is a method and system for preventing the uploading of faked personal photos. In the disclosed method, the person whose photo is to be verified is asked to respond to a random challenge presented to him. The challenge is comprised of a plurality of random phrases which he has to pronounce aloud in front of a camera. This type of challenge generates a video. Each frame of the video is in effect a static photo, features of which can be compared to the uploaded photo. The same video is used to verify, by lip reading, that the same person actually said the required phrase, making such a verification method very difficult to outwit.
[0016] The phrase challenge is randomly selected from a pre-prepared vocabulary of words, and it can be a single word or any sequence of words. Furthermore, the challenge can be presented to the person by displaying written words, by displaying a picture representing the word, by voice, or any combination thereof.
[0017] The verification can be augmented by recording the person's audio and using speech analysis to verify that the requested phrase was said.
[0018] BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 presents the general flow of the photo verification process.
FIG. 2 presents verification of one uploaded photo.
FIG. 3 shows the process for acceptance of a static photo for verification.
FIG. 4 shows a detailed flowchart of one embodiment of photo verification.
FIG. 5 shows one embodiment of system components for photo verification.
DETAILED DESCRIPTION
[0019] The innovation is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the innovation can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate a description thereof.
[0020] As used in this application, the terms "component" and "system" are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.
[0021] As used herein, the terms to "infer" and "inference" refer generally to the process of reasoning about or inferring states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic, that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
[0022] Referring initially to the drawings, FIG. 1 illustrates a general flow 100 of the photo verification process. The user can upload a plurality of static photos. In order to verify that the photos belong to the person who uploaded them, the static photos are received by the system - step 110. The user is then challenged with a random phrase - step 120. The user is asked to provide real-time video and voice of himself saying the random phrase presented to him - step 130. Three parallel processes are carried out for verification. In step 140, a match is sought between the uploaded static photos and frames grabbed from the video. A speech recognition algorithm - step 150 - is executed to verify that the user said the requested phrase. Simultaneously, lip movement is analyzed to verify synchronization with the spoken phrase and to perform "lip reading" of the spoken phrase - step 160. Each verification process generates a match score, all of which are combined in step 170, and the final decision is made in step 180.
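The flow of FIG. 1 can be summarized, purely as an illustrative sketch and not as the disclosed implementation, by the following Python outline; the scoring callables, the weights and the threshold are assumptions made here for illustration only.

```python
from typing import Callable, Optional, Sequence

def verify_upload(
    static_photos: Sequence[bytes],
    video_frames: Sequence[bytes],
    challenge_phrase: str,
    face_match: Callable[[bytes, Sequence[bytes]], float],
    lip_match: Callable[[Sequence[bytes], str], float],
    speech_match: Optional[Callable[[str], float]] = None,
    threshold: float = 0.8,
) -> bool:
    """Combine the three parallel checks of FIG. 1 (steps 140-180)."""
    # Step 140: best match between any uploaded photo and the grabbed frames.
    face_score = max(face_match(photo, video_frames) for photo in static_photos)
    # Step 150: speech recognition is optional (a microphone may be absent).
    speech_score = speech_match(challenge_phrase) if speech_match else 0.0
    # Step 160: lip reading / lip-sync check against the challenge phrase.
    lip_score = lip_match(video_frames, challenge_phrase)
    # Step 170: combine the individual match scores (weights are arbitrary here).
    total = 0.5 * face_score + 0.2 * speech_score + 0.3 * lip_score
    # Step 180: final decision.
    return total >= threshold
```

In use, face_match, lip_match and speech_match would be supplied by whichever face recognition, lip reading and speech recognition components the system employs.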
[0023] The random phrase can be presented to the user by voice message, by displayed text, by a picture of an object, or by any other means. The phrase can be short (a single syllable) or it can contain a plurality of words. For example, the phrase "dog" can be presented by saying the word via the speaker in the user's equipment, by displaying the word "dog" on the user's screen, or by displaying a picture of a dog. Note that the vocabulary from which the random phrase is built contains phrases which the computer, using machine learning algorithms, was trained to analyze.
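As a small illustration of step 120, a challenge could be drawn from such a trained vocabulary as follows; the vocabulary contents, the picture file names and the presentation modes shown here are hypothetical.

```python
import random

# Hypothetical trained vocabulary; an entry may carry an optional picture file.
VOCABULARY = {"dog": "dog.png", "seven": None, "blue car": None}

def make_challenge(num_phrases: int = 1) -> list:
    """Step 120: draw one or more random phrases from the trained vocabulary."""
    return random.sample(list(VOCABULARY), k=num_phrases)

def present_challenge(phrases: list, mode: str = "text") -> list:
    """Return presentation instructions: displayed text, a picture of the word,
    or a voice prompt; any combination is allowed by the disclosure."""
    if mode == "picture":
        return [VOCABULARY[p] or p for p in phrases]   # fall back to text
    if mode == "voice":
        return ["speak:" + p for p in phrases]         # handed to a TTS engine
    return list(phrases)                               # plain displayed text
```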
[0024] It should also be noted that voice input (a microphone) is not mandatory. However, the user has to say the presented phrase aloud so that his lips move. The use of voice input can shorten the verification process. It should also be emphasized that when the user uploads more than one static photo, only those photos that pass the verification tests will be accepted.
[0025] In the following sections one embodiment of the disclosed invention is described. In order to simplify the description, the vocabulary from which the random phrases are selected contains the numeric digits zero to nine, and a phrase includes just one digit. This vocabulary is chosen for explanation purposes only and does not, in any way, limit the scope of the disclosed invention. A person skilled in the art can build a vocabulary suited to his application.
[0026] FIG. 2 presents a flowchart of the verification of one uploaded photo. The user uploads a photo in step 210. The uploaded photo is analyzed in step 212 to find out whether or not it is adequate as a reference for verification. If the photo cannot be used as an adequate reference, as checked in step 214, the user is informed in step 216. The user can then upload another photo or stop the uploading process. If the uploaded photo can be used as a reference, the web camera and voice input device (if one exists) are turned on - step 218. Note that live camera input is a must, whereas voice input is optional. The verification process - step 220 - is comprised of a plurality of cycles, where in each cycle a single phrase (a random digit) is displayed (either in numeric format or word format) to the user, who has to speak the displayed digit aloud. The number of cycles depends on the accumulated verification score. The process terminates either when the photo is verified or when the maximum allowed number of cycles has been completed without positive verification. The result is checked in step 222 and the photo is either verified - step 224 - or denied - step 226.
[0027] FIG. 3 presents a flowchart of the process 300 for acceptance of a static photo for verification. It is a detailed description of the process executed in step 212 of FIG. 2. A static photo is loaded in step 310. Since the face is used for verification, the face is extracted from the loaded photo in step 312. If the face cannot be properly extracted, as tested in step 314, the photo upload is denied - step 320 - and an appropriate message is given to the user. If a face is found in the photo, then features of the face and the lips are analyzed. If the features found are adequate for comparison, the information is saved as a reference for comparison, in step 322.
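Steps 310 to 322 could, for example, be sketched with OpenCV's bundled Haar cascade face detector as below; the detector choice and the crude mouth-region crop are assumptions, since the disclosure does not prescribe a particular face or lip feature extractor.

```python
import cv2  # OpenCV; any face detector could be substituted here

_FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def accept_reference_photo(path: str):
    """Steps 310-322 sketch: load a static photo, extract the face, and keep it
    as the comparison reference; return None if the photo is denied (step 320)."""
    image = cv2.imread(path)
    if image is None:
        return None                                    # unreadable upload
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = _FACE_DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        return None                                    # steps 314/320: no single clear face
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    mouth = face[int(0.65 * h):, :]                    # rough lower-face crop for lip features
    return {"face": face, "mouth": mouth}              # step 322: saved reference
```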
[0028] FIG. 4 presents a flowchart of one embodiment of the verification process. This flowchart is a detailed description of the process executed in step 222 of FIG. 2. In step 410 the total match score and the number of cycles are reset to zero. Steps 412 to 424 are repeated until either a match is achieved and the uploaded photo is verified, or the number of cycles exceeds the defined maximum, as checked in steps 426 and 428 respectively. In the presented embodiment, in each cycle one random phrase (digit) is displayed on the user's screen, using either word format (e.g. "one", "two") or Arabic numerals (e.g. "1", "2") - step 412. The user is asked, in step 414, to face the camera and speak the displayed digit aloud. Video of the user, including his voice, is recorded for a predetermined period of time - step 416.
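The cycle structure of FIG. 4 (steps 410 to 432) might be skeletonized as follows; run_cycle is a stand-in for steps 414 to 424, and the threshold and cycle limit are illustrative values only.

```python
import random
from typing import Callable

DIGIT_WORDS = ["zero", "one", "two", "three", "four",
               "five", "six", "seven", "eight", "nine"]

def run_verification(run_cycle: Callable[[str], float],
                     threshold: float = 2.5, max_cycles: int = 5) -> bool:
    """FIG. 4 skeleton: `run_cycle` records the user speaking the prompt and
    returns one cycle matching score (steps 414-424)."""
    total_score, cycles = 0.0, 0                 # step 410: reset score and counter
    while cycles < max_cycles:                   # step 428: cycle limit
        digit = random.randrange(10)             # step 412: random digit...
        prompt = str(digit) if random.random() < 0.5 else DIGIT_WORDS[digit]
        total_score += run_cycle(prompt)         # ...shown as numeral or word
        cycles += 1
        if total_score > threshold:              # step 426: verified
            return True
    return False                                 # step 432: not verified
```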
[0029] In step 418 the recorded voice is analyzed to find out whether the user said the correct digit. A voice matching score, which defines the level of certainty of the detected spoken digit, is generated. In step 420 static features of the user's face are extracted from multiple frames of the recorded video. The values representing the same face feature in multiple frames are combined. The face features derived from the video frames are compared to those stored from the static uploaded photo, and a face recognition match score is generated. In step 422 a lip reading algorithm is executed. A time sequence of lip features is extracted from the incoming video frames. These feature time sequences are compared to reference sequences stored for each phrase (digit). For the comparison, the video feature time sequences are first scaled in time relative to the reference feature time sequence. The features comprise the vertical lip opening, lip width, lip shape, and open lip area. A lip reading matching score is generated.
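The time scaling described for step 422 can be illustrated with simple linear resampling of the lip-feature sequence (numpy assumed); dynamic time warping or any other alignment could be used instead, and the four feature columns follow the listing above.

```python
import numpy as np

def resample(seq: np.ndarray, length: int) -> np.ndarray:
    """Scale a per-frame feature sequence in time to a common length."""
    old = np.linspace(0.0, 1.0, num=len(seq))
    new = np.linspace(0.0, 1.0, num=length)
    return np.stack([np.interp(new, old, seq[:, i]) for i in range(seq.shape[1])],
                    axis=1)

def lip_match_score(observed: np.ndarray, reference: np.ndarray) -> float:
    """Step 422 sketch: each row holds (vertical opening, width, shape, open area)
    for one frame; the observed sequence is rescaled to the reference length and
    compared, with a smaller distance mapping to a higher score in [0, 1]."""
    observed = resample(observed, len(reference))
    distance = float(np.mean(np.abs(observed - reference)))
    return 1.0 / (1.0 + distance)
```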
[0030] The three generated matching scores are combined in step 424 to produce a cycle matching score, which is added to the total computed matching score. When the total matching score exceeds a predefined threshold score, as tested in step 426, the presented static photo is verified. Otherwise, if the maximum number of comparison cycles has been executed (step 428) and the loaded photo was not verified, it is declared as not verified - step 432.
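The combination in step 424 is not specified in detail; one hypothetical choice is a weighted sum of the three per-cycle scores, with the voice score simply omitted when no microphone is available.

```python
from typing import Optional

def cycle_matching_score(face_score: float, lip_score: float,
                         voice_score: Optional[float] = None,
                         weights=(0.5, 0.3, 0.2)) -> float:
    """Step 424 sketch: fold the face-recognition, lip-reading and (optional)
    voice matching scores, each assumed to lie in [0, 1], into one cycle score."""
    def clamp(s: float) -> float:
        return max(0.0, min(1.0, s))
    w_face, w_lip, w_voice = weights
    score = w_face * clamp(face_score) + w_lip * clamp(lip_score)
    if voice_score is not None:
        score += w_voice * clamp(voice_score)
    return score
```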
[0031] An example of a system for performing photo upload verification is given in FIG. 5. The system is comprised of a communication network 510, a server 520 and user devices 530, 540, 550 and 560. The user equipment can be a desktop computer 550, a tablet 530, a portable computer 560, a smart cellular phone 540, or any other device that can communicate via a wired or wireless network 510 with the server, and which includes, at minimum, a video camera and a user display. A voice channel is optional, because verification can be done without the use of voice recording. The verification process is carried out in the server 520, which includes a database of "lip reading" reference information generated by a machine learning algorithm. It is important to note that the verification process can be done by software running on the server or by any combination of software and dedicated hardware, such as image processing hardware or voice processing hardware.
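The disclosure leaves the client-server exchange open; as one hypothetical arrangement, server 520 could expose a single HTTP endpoint that receives the static photo, the challenge video and the optional audio track (Flask is used here purely for illustration, and run_verification_pipeline is a placeholder, not part of the disclosure).

```python
from typing import Optional
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_verification_pipeline(photo: bytes, video: bytes,
                              audio: Optional[bytes]) -> bool:
    """Placeholder for the server-side processing of FIG. 1 / FIG. 4;
    it always denies here."""
    return False

@app.route("/verify", methods=["POST"])
def verify():
    """Hypothetical endpoint on server 520: the user device posts the static
    photo, the recorded challenge video and, optionally, the audio track."""
    photo = request.files.get("photo")
    video = request.files.get("video")       # live camera input is mandatory
    audio = request.files.get("audio")       # the voice channel is optional
    if photo is None or video is None:
        return jsonify(verified=False, error="photo and video are required"), 400
    verified = run_verification_pipeline(photo.read(), video.read(),
                                         audio.read() if audio else None)
    return jsonify(verified=verified)
```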
[0032] What has been described above is just one embodiment of the disclosed innovation. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the innovation is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims.
Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.
[0033] Thus, one skilled in the art can use a similar system and methodology to verify the uploading of a video, by treating each frame of the uploaded video as an uploaded static photo, and deciding to verify the video if a few frames are verified as static photos.
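Paragraph [0033] can be sketched the same way: each frame is passed through whatever single-photo verification the system uses, and the video is accepted once a few frames pass; the frame extraction and the threshold of three frames are assumptions.

```python
from typing import Callable, Sequence

def verify_uploaded_video(frames: Sequence[bytes],
                          verify_photo: Callable[[bytes], bool],
                          min_verified: int = 3) -> bool:
    """Treat each frame of an uploaded video as an uploaded static photo and
    accept the video once a few frames have been verified."""
    verified = 0
    for frame in frames:
        if verify_photo(frame):
            verified += 1
            if verified >= min_verified:
                return True
    return False
```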

Claims

CLAIMS
What is claimed is:
1. A method for preventing the uploading of personal faked photos, the method comprising:
a) receiving the photos to be verified;
b) presenting a random phrase challenge to be said aloud by the photo uploader;
c) requesting live video of the person whose photo is to be uploaded while performing the challenge;
d) analyzing the video relative to the uploaded photo and performing lip reading while the challenge is performed; and
e) evaluating matching levels and providing verification results.
2. The method described in claim 1, where the random phrase challenge is presented as written words, a picture, voice, or any combination thereof.
3. The method described in claim 1, where the challenge presented is to speak aloud a presented series of randomly selected digits.
4. The method of claim 3, where the displayed digits are randomly presented either as Arabic numerals or in word format.
5. The method of claim 2, where a voice recording of the spoken challenge is analyzed.
6. A computer program product comprising:
a) a computer-readable medium comprising:
i. code to request authentication information for a user;
ii. code to present an action challenge to a user;
iii. code to receive a video response and an audio response from the user;
iv. code for lip reading from the video response; and
v. code to compare between images and verify the response to the action challenge.
7. A method for preventing the uploading of personal faked photos, the method comprising:
a) receiving the photos to be verified;
b) presenting a random phrase challenge to be said aloud by the photo uploader;
c) requesting a voice recording while the challenge is performed;
d) analyzing the video relative to the uploaded photo;
e) performing speech recognition on the audio recorded while the requested phrase is said; and
f) evaluating matching levels and providing verification results.
PCT/IL2016/050198 2015-03-01 2016-02-21 Method and system for preventing uploading of faked photos WO2016139655A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL23749115 2015-03-01
IL237491 2015-03-01

Publications (1)

Publication Number Publication Date
WO2016139655A1 true WO2016139655A1 (en) 2016-09-09

Family

ID=56849297

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2016/050198 WO2016139655A1 (en) 2015-03-01 2016-02-21 Method and system for preventing uploading of faked photos

Country Status (1)

Country Link
WO (1) WO2016139655A1 (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090138405A1 (en) * 2007-11-26 2009-05-28 Biometry.Com Ag System and method for performing secure online transactions
US20130051632A1 (en) * 2011-08-25 2013-02-28 King Saud University Passive continuous authentication method
US20140007224A1 (en) * 2012-06-29 2014-01-02 Ning Lu Real human detection and confirmation in personal credential verification
WO2014207752A1 (en) * 2013-06-27 2014-12-31 Hewlett-Packard Development Company, L.P. Authenticating user by correlating speech and corresponding lip shape
US20150026798A1 (en) * 2013-07-22 2015-01-22 Hong Fu Jin Precision Industry (Wuhan) Co., Ltd. Electronic device and method for identifying a remote device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3460697A4 (en) * 2016-05-19 2019-05-08 Alibaba Group Holding Limited Identity authentication method and apparatus
US10789343B2 (en) 2016-05-19 2020-09-29 Alibaba Group Holding Limited Identity authentication method and apparatus
CN106778179A (en) * 2017-01-05 2017-05-31 南京大学 A kind of identity identifying method based on the identification of ultrasonic wave lip reading
CN106778179B (en) * 2017-01-05 2021-07-09 南京大学 Identity authentication method based on ultrasonic lip language identification
WO2020017902A1 (en) * 2018-07-18 2020-01-23 Samsung Electronics Co., Ltd. Method and apparatus for performing user authentication
US11281760B2 (en) 2018-07-18 2022-03-22 Samsung Electronics Co., Ltd. Method and apparatus for performing user authentication

Similar Documents

Publication Publication Date Title
US10628571B2 (en) Systems and methods for high fidelity multi-modal out-of-band biometric authentication with human cross-checking
US10275672B2 (en) Method and apparatus for authenticating liveness face, and computer program product thereof
US10303964B1 (en) Systems and methods for high fidelity multi-modal out-of-band biometric authentication through vector-based multi-profile storage
US11023754B2 (en) Systems and methods for high fidelity multi-modal out-of-band biometric authentication
WO2021068616A1 (en) Method and device for identity authentication, computer device, and storage medium
US10977356B2 (en) Authentication using facial image comparison
US9712526B2 (en) User authentication for social networks
US11429700B2 (en) Authentication device, authentication system, and authentication method
EP2995040B1 (en) Systems and methods for high fidelity multi-modal out-of-band biometric authentication
US20180131692A1 (en) System and a method for applying dynamically configurable means of user authentication
CN105117622A (en) Authentication using a video signature
US20240048572A1 (en) Digital media authentication
WO2016139655A1 (en) Method and system for preventing uploading of faked photos
CN114677634A (en) Surface label identification method and device, electronic equipment and storage medium
CN112151027A (en) Specific person inquiry method, device and storage medium based on digital person
Islam et al. A biometrics-based secure architecture for mobile computing
US20220269761A1 (en) Cognitive multi-factor authentication
Emami et al. Use and acceptance of biometric technologies among victims of identity crime and misuse in Australia.
Firc Applicability of Deepfakes in the Field of Cyber Security
US9674185B2 (en) Authentication using individual's inherent expression as secondary signature
Zolotarev et al. Liveness detection methods implementation to face identification reinforcement in gaming services
US20210258317A1 (en) Identity verification system and method
KR20020076487A (en) A method for authentication of a person using motion picture information
US11955114B1 (en) Method and system for providing real-time trustworthiness analysis
US20230325481A1 (en) Method and System for Authentication of a Subject by a Trusted Contact

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16758549

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16758549

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 11/09/2018)