US11341222B1 - System and method for securely viewing, editing and sharing documents and other information - Google Patents

System and method for securely viewing, editing and sharing documents and other information

Info

Publication number
US11341222B1
US11341222B1
Authority
US
United States
Prior art keywords
user
document
person
authorized user
confidence level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/542,451
Inventor
Dexter A. Caffey
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Eye Technology Inc
Original Assignee
Smart Eye Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Eye Technology Inc
Priority to US16/542,451
Assigned to Smart Eye Technology, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CAFFEY, DEXTER A.
Application granted
Publication of US11341222B1
Legal status: Active
Adjusted expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/06 Authentication
    • H04W 12/065 Continuous authentication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/34 User authentication involving the use of external additional devices, e.g. dongles or smart cards
    • G06F 21/35 User authentication involving the use of external additional devices, e.g. dongles or smart cards communicating wirelessly
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/131 Protocols for games, networked simulations or virtual reality
    • H04L 67/38
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/06 Authentication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L 63/0861 Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/10 Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L 63/102 Entity profiles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/10 Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L 63/105 Multiple levels of security

Definitions

  • the present invention allows for the creation of a biometrically secure environment that allows viewing, editing and sharing of confidential documents, or the like, in public places, without worrying that someone will see the contents.
  • the invention provides privacy, for example for the purposes of reading documents, in a public environment while having the confidence that you are the only one able to read the document. Privacy may be achieved through methods of identification using biometric features, such as: face, iris or voice recognition. Verification that a real person is viewing the document may also be achieved by pulse recognition. In one embodiment, the screen will shut down when more than one person looks directly at the screen.
  • a second layer of security in the form of liveness checks may also be provided. For example, this may be accomplished using pulse detection, in one embodiment.
  • the present invention may be implemented as a hardware product, a software product, or a combination thereof.
  • the present invention may be implemented on a variety of platforms, such as mobile device, tablet, laptop, desktop, etc., using a camera, microphone, etc.
  • Biometric identification gives rise to a solution serving both purposes.
  • The use of biometric technologies ensures that the user will never forget a password again, while simultaneously providing a very high level of security.
  • Biometric identification methods include, amongst others, face recognition (identification based on an image or video of a subject's face), iris recognition (identification based on an image or video of a subject's iris), fingerprint (identification based on a subject's finger print), voice recognition (identification based on a voice sample of the subject) or combination of any of the above methods, or other methods.
  • a liveness check allows a biometric application to discriminate between the real biometric factor of a subject and artificial copies of those features making up the biometric factor. Liveness detection reduces the likelihood that spoofing attempts will succeed, and as such reduces the false acceptance rate.
  • An example of a liveness check is facial recognition software requiring the subject to blink, smile, or nod. However, even these gestures can be faked quite easily using a mask with holes in it.
  • a secure environment must be protected at all times because identifying a person in the beginning of a document viewing/editing is not enough.
  • A solution according to the present invention is that, during the viewing/editing of the document, the identification process runs in the background while still providing the highest possible level of security, by performing an identification screening every X seconds.
  • a backup identification is offered as a precaution in the event that the continuous identification fails.
  • a secure environment should provide complete confidentiality, which means that it is necessary to determine that only one person is looking at a device at a given time.
  • the system may provide an alert or, at certain confidence levels, even shut off the screen.
  • Behavioral profiling may be used in order to give the user the best secure environment, while not harming the convenience of use and improving the user experience.
  • Behavioral profiling defines a person by his or her habits: location, Wi-Fi networks to which he or she often connects, frequently visited sites, manner of text input, the way in which the person holds the phone, etc. Documents are best secured by using a behavioral profiling score to define the security levels that need to be applied. For example, GPS coordinates or the Wi-Fi network may be used to define a score: a low score means the user is in a public place, so the security level needs to be set higher. A medium score means the person is in a workplace, and the security would be at a medium level. A high score corresponds to, for example, the user being at home, which will result in a lower level of security.
  • Behavioral profiling scores can be calculated according to the sensors present in a device.
  • For example, a location sensor, gyro sensor, or Wi-Fi/mobile module may be provided that can supply relevant information.
  • For laptops or desktops, location may, for example, be obtained from the IP address, the browser history may be checked, and many more signals can be learned from different sensors and modules.
  • Behavioral profiling can be used in addition to existing biometric methods but, in one embodiment, cannot supply a secure environment if biometric methods do not exist.
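  • As an illustration of the behavioral profiling score described above, the following is a minimal Python sketch; it is not from the patent, and the trusted network names, coordinates, and thresholds are invented placeholders:

```python
# Hypothetical sketch of a behavioral-profiling score. All names, networks,
# and coordinates below are invented placeholders, not values from the patent.
TRUSTED_HOME_WIFI = {"home-ssid"}
HOME_GPS, WORK_GPS = (32.080, 34.780), (32.060, 34.770)

def near(a, b, eps=0.01):
    """Crude proximity test on (lat, lon) pairs."""
    return abs(a[0] - b[0]) < eps and abs(a[1] - b[1]) < eps

def profiling_score(ssid, gps):
    """High score -> trusted context -> lower security level, and vice versa."""
    if ssid in TRUSTED_HOME_WIFI or near(gps, HOME_GPS):
        return "high"    # at home: lowest security level
    if near(gps, WORK_GPS):
        return "medium"  # at work: medium security level
    return "low"         # public place: highest security level
```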
  • FIG. 1 is an overview of the enrollment and verification/recognition process of the present invention.
  • FIG. 2 is a typical facial biometric flow of the present invention.
  • FIG. 3 depicts common fingerprint patterns.
  • FIG. 4 depicts fingerprint classification features.
  • FIG. 5 depicts a typical fingerprint recognition process.
  • FIG. 6 depicts a typical flow of speaker biometrics.
  • FIG. 7 depicts an eye and its various parts.
  • FIG. 8 depicts a typical flow of an iris recognition process.
  • FIG. 9 depicts a typical smartphone with iris scanning capabilities.
  • FIG. 10 depicts a first process that may be performed by the present invention.
  • FIG. 11 depicts a second process that may be performed by the present invention.
  • FIG. 12 depicts a third process that may be performed by the present invention.
  • FIG. 13 depicts a typical flow of an enrollment process of the present invention.
  • FIG. 14 depicts a typical flow of the secure phase of the present invention.
  • FIG. 15 depicts a typical architecture of the present invention.
  • FIGS. 16A-16LLL depict various typical screenshots of the present invention, with corresponding function.
  • Biometrics is the science of detecting and recognizing human characteristics using technologies that measure and analyze individual biological data. The way we are genetically composed creates identifiable traits that can uniquely represent us as individuals. DNA can be used to distinguish between two individuals, except for identical twins. Some biometric traits, such as fingerprints and iris prints, are distinctive even among identical twins.
  • FIG. 1 provides an overview of a typical biometric enrollment flow 100 and verification/recognition flow 110.
  • In the enrollment flow, a user's biometric information is presented, captured in step 102, processed in step 103, and stored in a database 104.
  • In the verification/recognition flow, biometric information is presented 111 and captured (step 112), processed in step 113, and in step 114 the biometric data processed in step 113 is compared to the enrollment biometric data stored in the database 104.
  • The result is either a match (115) or no match (116).
  • Various biometric processes and techniques may be used, including, for example: facial biometrics, fingerprint recognition, speaker biometrics, liveness checks, iris recognition, etc.
  • the face is an important part of who you are and how people identify you. Except in the case of identical twins, the face is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are only now catching up.
  • A typical facial biometric flow is depicted in FIG. 2 and described further below.
  • Face detection is the identification of human faces in digital images; in essence, it is the task of defining bounding boxes that surround human faces. Face detection is a crucial pre-processing step in any face verification system: detecting the faces in an image allows each face to be extracted from the background and analyzed separately.
  • In step 201, an image is captured; in step 202, a face is detected (as described in further detail below); in step 203, face alignment is performed; and feature extraction is performed in step 204.
  • In step 205, an image of a new face may be matched against the features of faces stored in database 206, and in step 207 a decision is made as to whether the new face matches a particular stored face.
  • face detection may be the tool used to determine whether only a single person is currently viewing the screen (a single face is located in the frame captured by the camera) or multiple people.
  • MTCNN (Multi-Task Cascaded Convolutional Networks) is one well-known approach to face detection.
  • YOLO (You Only Look Once) is a concept recently introduced in the field of real-time object detection.
  • YOLO divides a given image into a square grid, predicts bounding boxes for each grid cell, and predicts a class probability for each cell. As a final step, the bounding boxes are merged and a final class prediction is given.
  • YOLO is known to work at real-time rates even when running on relatively simple devices. Since a viewed face is simply an object in a digital image, the use of such an advanced, state-of-the-art detection system is a suitable choice.
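  • As an illustration of face detection for the single-viewer check described later, the sketch below counts faces with the MTCNN implementation from the facenet-pytorch package; the package choice and file name are assumptions, not choices named in the patent:

```python
from PIL import Image
from facenet_pytorch import MTCNN  # assumed third-party dependency

detector = MTCNN(keep_all=True)  # keep_all=True returns every detected face

def count_faces(image):
    """Return the number of faces in a PIL image; the screen-privacy
    logic expects exactly one (the enrolled viewer)."""
    boxes, probs = detector.detect(image)
    return 0 if boxes is None else len(boxes)

print(count_faces(Image.open("frame.jpg")))  # hypothetical camera frame
```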
  • Face alignment is the task of deforming facial images such that their facial landmarks will eventually have the same orientation.
  • The task is vital in order to compensate for the high variability of poses in which the face may be captured. It allows face verification to be feasible without having the user enroll in all possible poses to the camera.
  • the procedure usually involves the use of facial landmarks and creates the best possible correspondence between these landmarks.
  • In One Millisecond Face Alignment with an Ensemble of Regression Trees, the framework learns face landmarks in preparation for the actual face alignment, allowing an initial extraction of facial landmarks from the image and alignment of the image according to said landmarks. This a priori step allows for real-time performance.
  • the alignment process itself may use a cascaded regression pipeline.
  • Deep Alignment Network (DAN), described in Deep Alignment Network: A convolutional neural network for robust face alignment, is a deep neural network that includes several stages. Each stage of DAN includes a feed-forward neural network that performs landmark location estimation, and connection layers that generate the input for the next stage. DAN passes three inputs between stages: the input image, warped so that the current landmark estimates are aligned with the canonical shape; a landmark heatmap; and a feature image.
  • the advantage of DAN is that it extracts features from the entire face image rather than the patches around landmark locations.
  • An additional advantage is the use of a landmark heatmap that is an image with high intensity values around landmark locations where intensity decreases with the distance from the nearest landmark.
  • Feature extraction is the stage in which a biometric vector is extracted from the image to later be used by the matcher to determine whether two facial images are identical.
  • FaceNet: A Unified Embedding for Face Recognition and Clustering.
  • FaceNet is a system that directly learns mappings from image faces to a compact Euclidean space where distances directly correspond to a measure of face similarity.
  • The framework goes through a learning procedure allowing it to extract features capable of differentiating different facial images. This framework is based on the use of positive and negative examples of facial images. For each user, an anchor is determined. The framework attempts to bring positives (images of the same individual) “closer” in the formed feature space and push negatives (images of different individuals) “further away”. This framework makes it possible to determine whether two facial images came from the same source or from different sources.
  • This framework treats the extraction stage of the features as a two stage operation.
  • a neural network extracts features from the facial image.
  • three networks embed asymmetrical and complex facial features.
  • This framework extracts facial embedding that can be fed into a matcher deciding if two faces came from the same source or from different sources.
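  • The anchor/positive/negative objective described above is commonly realized as a triplet loss. The following is a minimal PyTorch sketch, an illustrative reconstruction rather than code from the patent:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss on batches of embeddings (N x D)."""
    # L2-normalize so all embeddings lie on the unit hypersphere
    anchor, positive, negative = (F.normalize(x, dim=1)
                                  for x in (anchor, positive, negative))
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # same-identity distance
    d_neg = (anchor - negative).pow(2).sum(dim=1)  # different-identity distance
    # Penalize whenever the positive is not closer than the negative by `margin`
    return F.relu(d_pos - d_neg + margin).mean()

# Usage with random stand-in embeddings:
a, p, n = (torch.randn(8, 128) for _ in range(3))
print(triplet_loss(a, p, n))
```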
  • Feature matching is the matching of feature vectors extracted from two different images.
  • the matching may be performed in various different ways, including but not limited to the distance-based comparison sketched below.
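  • For example, two face embeddings can be compared by Euclidean distance and thresholded; in this sketch the threshold is an invented placeholder that would be tuned on validation data:

```python
import numpy as np

def is_same_face(emb1, emb2, threshold=1.1):
    """Decide whether two embeddings depict the same person.
    The threshold is a placeholder that would be tuned on data."""
    emb1 = emb1 / np.linalg.norm(emb1)  # L2-normalize both embeddings
    emb2 = emb2 / np.linalg.norm(emb2)
    return np.linalg.norm(emb1 - emb2) < threshold
```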
  • A fingerprint, in its narrow sense, is an impression left by the friction ridges of a human finger.
  • the fingerprint pattern is permanent and unchangeable.
  • The probability that the fingerprints of two individuals are alike is approximately 1 in a quadrillion.
  • FIG. 3 depicts common fingerprint patterns.
  • Minutiae classification is based on:
  • FIG. 4 depicts fingerprint classification features, such as Crossover 401, Core 402, Bifurcation 403, Ridge Ending 404, Island 405, Delta 406 and Pore 407.
  • FIG. 5 depicts a typical fingerprint recognition flow, described in further detail hereafter.
  • In step 501, a fingerprint is captured; in step 502, pre-processing occurs; features are extracted in step 503; and fingerprint matching is performed in step 504, based on stored fingerprints in database 505.
  • Two main technologies may be used to capture an image of the fingerprint.
  • Pre-processing is the stage that prepares the image to facilitate further work with it.
  • Pre-processing can include enhancement of the image, binarization of the image, finding the region of interest (ROI), thinning of the fingerprint image, detecting the core point, and minutiae extraction.
  • Thinning of the fingerprint image: Generally, the gray values of ridge pixels in the fingerprint image gradually decrease going from an edge toward the center of the ridge line, then increase again going toward the other edge. This is the definition of a local minimum. The idea is to capture this local-minimum line, converting a ridge that is several (e.g., 5) pixels wide into a ridge one pixel wide.
  • Core point detection: The core (or singular) point of a fingerprint is defined as “the point of the maximum curvature on the convex ridge”, which is usually located in the central area of the fingerprint. Reliable detection of the position of this reference point can be accomplished by detecting the maximum curvature.
  • Minutiae extraction: Most fingerprint minutiae extraction methods are thinning-based, where a skeletonization process converts each ridge to one pixel wide. Minutiae points are detected by locating the end points and bifurcation points on the thinned ridge skeleton, based on the number of neighboring pixels: end points have a single neighbor, and bifurcation points have more than two neighbors.
  • A feature is a piece of information that describes a specific part of an input image.
  • An extraction of the features is performed that creates a unique ID code for each fingerprint.
  • Extracted features are used in the final feature matching stage to perform fingerprint recognition.
  • the features of the fingerprint will be represented by the number of minutiae of each type within specific distances from the core point. This is achieved by dividing the fingerprint image into concentric tracks around the core point, as sketched below.
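  • A minimal sketch of this concentric-track encoding follows; the minutiae input format, track count, and track width are assumptions for illustration:

```python
import numpy as np

def track_features(minutiae, core, n_tracks=5, track_width=20.0):
    """Count minutiae of each type in concentric tracks around the core point.

    minutiae: iterable of (x, y, kind) with kind in {'ending', 'bifurcation'}.
    core: (x, y) of the detected core point. Track count/width are placeholders.
    """
    kinds = ["ending", "bifurcation"]
    counts = np.zeros((n_tracks, len(kinds)), dtype=int)
    for x, y, kind in minutiae:
        r = np.hypot(x - core[0], y - core[1])   # distance from the core
        track = int(r // track_width)
        if track < n_tracks:
            counts[track, kinds.index(kind)] += 1
    return counts.ravel()  # fixed-length feature vector for matching
```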
  • Fingerprint matching is the process used to determine whether two sets of fingerprint features come from the same finger. One set of features is stored in the database, and the other is computed from the acquired fingerprint image.
  • the matching may be performed in various different ways, including but not limited to the simple distance comparison sketched below.
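  • For instance, two of the track-feature vectors sketched above can be compared by Euclidean distance; the threshold is an invented placeholder:

```python
import numpy as np

def fingerprints_match(stored_vec, probe_vec, threshold=4.0):
    """Match two track-feature vectors by Euclidean distance.
    The threshold is a placeholder that would be tuned on real data."""
    return np.linalg.norm(stored_vec - probe_vec) < threshold
```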
  • Advantages of using fingerprint recognition include that it is a very fast technology, and that the probability of the fingerprints of two individuals being alike is approximately 1 in a quadrillion.
  • The disadvantages include that a fingerprint scanner is required, and that acidity can change a fingerprint.
  • Speaker recognition is the identification of a person from characteristics of voices. It is also called voice recognition. There is a difference between speaker recognition (recognizing who is speaking) and speech recognition (recognizing what is being said). These two terms are frequently confused, and “voice recognition” can be used for both.
  • Speaker verification may be performed in a text-dependent manner (a unique pass phrase for a user) or a text-independent manner (the user is verified based on the voice print alone, independently of what is said). This section focuses solely on text-independent methods, as we see these methods as the future of the field. Moreover, using text-independent recognition, one may add a second layer of pass-phrase matching to convert it into a text-dependent method.
  • Voice Activity Detection is the process in which voiced segments are extracted out of the entire speech signal. Speaker analysis should preferably be performed only on segments recognized as speech as silent segments are shared amongst all speech signals.
  • FIG. 6 depicts the typical flow of speaker biometrics, described in further detail hereafter.
  • two overall processes may be used for speaker biometrics—enrollment 600 and verification/recognition 610.
  • In step 601 of the enrollment process, the speech biometrics of a user to be enrolled may be presented.
  • Features of the speech may be extracted in step 602, and a model may be trained (as will be described in further detail below) in step 603.
  • A voiceprint is created and stored in step 604.
  • In the verification/recognition process, a person's speech biometrics may be presented in step 611, features are extracted in step 612, and in step 613 the extracted features are compared to the voiceprints (stored in step 604), resulting in a decision: match (614) or no match (615).
  • MFCCs (Mel-Frequency Cepstral Coefficients) are a common choice of features extracted from the speech signal.
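  • As an illustration of MFCC extraction, the sketch below uses the librosa package (an assumed dependency); the file name and parameter values are placeholders, and silence trimming stands in crudely for full voice activity detection:

```python
import librosa

# Load audio at 16 kHz (a common rate for speech processing)
y, sr = librosa.load("utterance.wav", sr=16000)     # hypothetical recording
y, _ = librosa.effects.trim(y, top_db=30)           # drop leading/trailing silence
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # shape: (20, n_frames)
frames = mfcc.T                                     # one 20-D vector per frame
```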
  • the system is built around the likelihood ratio test for verification, using simple but effective GMMs for likelihood functions, a universal background model (UBM) for alternative speaker representation, and a form of Bayesian adaptation to derive speaker models from the UBM.
  • This method requires an initial learning step in which a UBM is created.
  • This model is meant to capture the total variance of all possible speakers. The result is a model of the distribution of all background models.
  • An utterance is tested for the probability of it belonging to a set of speaker features or of it belonging to the UBM, and a decision is made based on the likelihood ratio. More recent approaches use SVMs (support vector machines) and deep networks for binary classification for the task of verification.
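  • A simplified sketch of the likelihood-ratio test follows. It is illustrative only: the speaker model is fitted independently with scikit-learn rather than derived from the UBM by Bayesian adaptation as described above, and the data arrays and threshold are stand-ins:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# ubm_frames: MFCC frames pooled from many background speakers (N x 20)
# spk_frames: MFCC frames from the enrolled speaker's utterances (M x 20)
ubm_frames = np.random.randn(5000, 20)   # stand-in data
spk_frames = np.random.randn(500, 20)    # stand-in data

ubm = GaussianMixture(n_components=64, covariance_type="diag").fit(ubm_frames)
spk = GaussianMixture(n_components=64, covariance_type="diag").fit(spk_frames)

def verify(test_frames, threshold=0.0):
    """Accept if the average per-frame log-likelihood ratio favors the speaker."""
    llr = spk.score(test_frames) - ubm.score(test_frames)
    return llr > threshold
```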
  • A liveness check is a method of verifying that a real person is trying to perform biometric recognition to enter the device.
  • A liveness check is used in addition to a biometric recognition method (such as face recognition, iris recognition, or voice recognition). Liveness detection reduces the likelihood that spoofing attempts will succeed, and as such reduces the false acceptance rate. Examples of liveness checks are described below.
  • Pulse recognition: a pulse may be extracted from video, for example using known techniques.
  • Blinking: a person may be asked to blink, and a camera may be used to recognize when the person is blinking.
  • Voice recognition: ask a person to say one of several random sequences that appear in a database, and then match the recorded voice pattern with the voice pattern stored in the database.
  • the iris is a thin circular diaphragm, which lies between the cornea and the lens of the human eye. It is perforated close to its center by a circular aperture known as the pupil.
  • the function of the iris is to control the amount of light entering the eye by controlling the diameter and size of the pupil.
  • the eye color is defined by that of the iris.
  • FIG. 7 depicts an eye and its various parts—for example, an iris 701 , pupil 702 and sclera 703 .
  • FIG. 8 depicts a typical flow of an iris recognition process, described in further detail hereafter.
  • Image acquisition 801 deals with capturing a sequence of iris images from the subject using cameras, desirably with sensors that have high resolution and good sharpness. Today it is possible to do so using the front smartphone camera or a modern camera, as examples.
  • the minimum size of iris needs to be at least 70 pixels, in one embodiment.
  • the maximum distance that the phone can be placed from the eyes is around 50 centimeters.
  • the iris needs to be clearly visible in the image, in one embodiment.
  • FIG. 9 depicts a smartphone 901 (in this case made by Samsung), with iris scanning capabilities.
  • iris segmentation 803 is used to locate the valid part of the iris for iris biometrics, including finding the pupillary and limbic boundaries of the iris, localizing its upper and lower eyelids if they occlude, and detecting and excluding any superimposed occlusions of eyelashes, shadows, or reflections. Iris segmentation can greatly affect the iris features that are needed for iris recognition.
  • Iris segmentation may be performed in various ways, as described in further detail below.
  • the iris region can be approximated by two circles, one for the iris/sclera boundary and another, interior to the first, for the iris/pupil boundary.
  • a combination of circular and linear Hough transforms can be used. The use of the two transforms yields a valid approximation of the iris in the image.
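  • A rough illustration of the two-circle approximation using OpenCV's circular Hough transform is shown below; the parameters are invented and would need tuning per sensor, and the linear Hough step for eyelid boundaries is omitted:

```python
import cv2
import numpy as np

gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical eye image
gray = cv2.medianBlur(gray, 5)                      # suppress noise first

# Detect candidate circles; radius bounds are placeholder guesses
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=30,
                           minRadius=15, maxRadius=120)
if circles is not None and len(circles[0]) >= 2:
    candidates = np.round(circles[0]).astype(int)   # each row: (x, y, r)
    # Heuristic: the smaller circle ~ iris/pupil boundary,
    # the larger ~ iris/sclera boundary
    pupil, iris = sorted(candidates[:2], key=lambda c: c[2])
```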
  • Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environment uses a two-stage iris segmentation method based on convolutional neural networks (CNN), which is capable of robustly finding the true iris boundary in intense cases with limited user cooperation.
  • The first stage is preprocessing, which includes bottom-hat filtering, noise removal, a Canny edge detector, contrast enhancement, and a modified Hough transform to approximate the iris boundary.
  • The second stage is a deep CNN that takes a fixed-size input image and fits the true iris boundary. This second stage is applied only to the region of interest (ROI) defined by the approximate iris boundary detected in the first stage.
  • Applying the second stage only to the ROI reduces the processing time and the error of the iris segmentation.
  • the result of the segmentation stage can be given to iris feature extraction for future matching.
  • a normalization stage 804 may be performed, in which the bounded iris is converted to an accepted iris template. This is needed, in one embodiment, for future matching 807 between the iris template and iris templates from database 806 .
  • the process is a conversion from Cartesian to non-concentric polar representation of the iris template.
  • Daugman's rubber sheet model ensures the proper handling of this conversion given the specifics of the iris.
  • This model converts the iris area from a Cartesian representation to a polar representation, mapping each pixel in the iris area to a pair of polar coordinates (r, θ), where r and θ lie on the intervals [0, 1] and [0, 2π], respectively. It accounts for size inconsistencies and pupil dilation of the iris area, but does not compensate for rotational inconsistencies between templates.
  • The output of this stage is an iris template in polar representation that is consistent with the template sizes in the database.
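  • A simplified sketch of the rubber sheet unwrapping follows; it assumes concentric circular boundaries, which the full model does not require, and the template dimensions are placeholders:

```python
import numpy as np

def rubber_sheet(image, center, pupil_r, iris_r, n_r=64, n_theta=256):
    """Map the iris annulus to a fixed-size (n_r x n_theta) polar template."""
    template = np.zeros((n_r, n_theta), dtype=image.dtype)
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    for i, r in enumerate(np.linspace(0.0, 1.0, n_r)):
        radius = pupil_r + r * (iris_r - pupil_r)  # r=0 at pupil, r=1 at sclera
        xs = (center[0] + radius * np.cos(thetas)).astype(int)
        ys = (center[1] + radius * np.sin(thetas)).astype(int)
        xs = np.clip(xs, 0, image.shape[1] - 1)    # stay inside the image
        ys = np.clip(ys, 0, image.shape[0] - 1)
        template[i] = image[ys, xs]
    return template
```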
  • Wildes has proposed an image registration technique for normalizing iris textures. A newly acquired image would be aligned with an image in the database, and a comparison will be performed.
  • The alignment process is a transformation using a chosen mapping function, and compensates for rotation and scale variations. It must be noted that this normalization is performed at matching time.
  • A feature is a piece of information that describes a specific part of an input image.
  • An extraction of the features is performed that creates a unique ID code for each normalized iris representation. Extracted features are used in a final feature matching stage 807 to perform iris recognition (with a result of match 808 or no match 809).
  • The verification/recognition stage 810 may then be performed. First, image acquisition 811 is performed, as well as an optional pre-processing step 812. Iris segmentation 813, image normalization 814 and feature extraction 815 are then performed. Finally, the matching decision 807 is made, based on database 806.
  • Gabor filters: To extract features from an iris pattern in polar representation, a demodulation process may be used. Local regions of the iris are projected onto quadrature 2-D Gabor wavelets, generating complex-valued coefficients whose real and imaginary parts specify the coordinates of a phasor in the complex plane. The angle of each phasor is quantized to one of the four quadrants, setting two bits of phase information. This process is repeated across the iris with many wavelet sizes, frequencies, and orientations, so that a vector of 2048 phase bits (256 bytes) is computed for each iris. In one embodiment, this method uses only phase information for recognizing irises, because amplitude information is not very discriminating.
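  • A heavily simplified sketch of the quadrant-quantization idea follows; 1-D complex carriers stand in for true 2-D Gabor wavelets, and the frequencies are placeholders:

```python
import numpy as np

def phase_bits(template, frequencies=(0.05, 0.1, 0.2)):
    """Produce a binary code from a polar iris template: for each carrier
    frequency, keep only the quadrant (2 bits) of each row's complex response."""
    rows, cols = template.shape
    x = np.arange(cols)
    code = []
    for f in frequencies:
        carrier = np.exp(-2j * np.pi * f * x)        # 1-D complex sinusoid
        resp = (template * carrier).sum(axis=1)      # one phasor per row
        code.append(resp.real > 0)                   # bit 1: sign of real part
        code.append(resp.imag > 0)                   # bit 2: sign of imag part
    return np.concatenate(code)                      # binary iris code
```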
  • Iris Recognition with Off-the-Shelf CNN Features: A Deep Learning Perspective. Training a new deep network requires a large amount of data, and suitable iris databases may be proprietary, may not yet exist, or may still be small. Therefore, this method suggests using one of the best-known deep networks, such as AlexNet, VGG, GoogLeNet/Inception, ResNet or DenseNet. These deep networks have already been trained on huge databases with a large number of classes, and are designed to recognize visual patterns directly from pixel images with minimal preprocessing. To achieve performance close to human recognition performance, these networks extract unique features that help to later recognize a detected object and classify it into the class with similar features. So, this method suggests using the aforementioned deep networks only up to the feature extraction stage, i.e., prior to the classification stage. The resulting iris features may then be matched against features in the database.
  • A multi-biometric iris recognition system based on a deep learning approach is a real-time multimodal biometric system called IrisConvNet. Its architecture is based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier, to extract discriminative features from the input images of both the right and left irises of a person without any domain knowledge.
  • Feature matching means finding corresponding features from two similar feature vectors based on a search distance.
  • one of the feature vectors is named the source and the other the target.
  • the feature matching process analyzes the similarity of source and target by using a suitable method.
  • the accuracy of feature matching depends on data similarity, complexity, and quality. In general, the more similar the two datasets, the better the matching results.
  • The Hamming distance, as employed by Daugman, may be used.
  • The Hamming distance uses XOR to measure how many bits disagree between two templates.
  • To obtain invariance to rotation when calculating the Hamming distance of two templates, one template is shifted left and right bit-wise, and a Hamming distance value is calculated for each successive shift. The number of shifts required to normalize for rotation is determined by the maximum angle difference between two images of the same eye.
  • One shift is defined as one shift to the left, followed by one shift to the right.
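  • A minimal sketch of shift-compensated Hamming matching on binary iris codes follows; the shift range and decision threshold are invented placeholders:

```python
import numpy as np

def hamming_distance(a, b):
    """Fraction of disagreeing bits between two binary templates (XOR)."""
    return np.logical_xor(a, b).mean()

def rotation_invariant_hd(a, b, max_shift=8):
    """Shift one template bit-wise left and right and keep the best distance,
    compensating for rotation (head tilt) between captures."""
    return min(hamming_distance(np.roll(a, s), b)
               for s in range(-max_shift, max_shift + 1))

def irises_match(a, b, threshold=0.32):
    return rotation_invariant_hd(a, b) < threshold
```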
  • Deep neural networks for an iris recognition system may also be based on video, using stacked sparse auto-encoders (SSAE) and bi-propagation neural network models.
  • the present invention may allow for various functions and embodiments, such as the following three functions, each of which can be implemented in a variety of ways.
  • FIG. 10 depicts Function 1—“Keep your eyes off my screen”, in one embodiment.
  • an authorized user of a device (smartphone, laptop, desktop, tablet, etc.) looks at the screen of the device, as usual.
  • the present invention is continuously operating (using biometric technology) to ensure that only the authorized user is viewing the screen of the device. While iris scanning technology is depicted in FIG. 10 , any other biometric technology, as described previously, may be used for this purpose, in other embodiments.
  • In step 1003, when an unauthorized user is detected, the screen of the device is turned off, turned black, or otherwise deactivated.
  • the system continues to scan to verify whether only the authorized user is in view.
  • In step 1005, if only the authorized user is detected, the screen is turned back on.
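  • A hypothetical end-to-end sketch of this loop is shown below, using an OpenCV Haar cascade for detection; is_authorized_face and set_screen are invented stand-ins for a real face-verification model and platform display control, and the polling interval is a placeholder:

```python
import time
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def monitor(is_authorized_face, set_screen, camera_index=0, interval=1.0):
    """Blank the screen unless exactly one face is visible and it verifies
    as the enrolled user; restore the screen when that again holds."""
    cam = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cam.read()
            if ok:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5)
                allowed = len(faces) == 1 and is_authorized_face(frame, faces[0])
                set_screen(on=allowed)
            time.sleep(interval)
    finally:
        cam.release()
```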
  • FIG. 11 depicts Function 2—“Keep your eyes off my document”.
  • the user sets the security level for a particular document.
  • the device operated by the recipient of the document scans (such as by using the camera on the device) biometric characteristics of the user of the device (such as by using iris detection, as shown in FIG. 11, but any other biometric detection may be used).
  • Once the device recognizes the recipient using the biometric scan, the document opens for the recipient.
  • A co-worker, stranger, etc. approaches the recipient and tries to view the document on the device as well.
  • In step 1105, the device detects the stranger, co-worker, etc., and closes the document (or blacks out the screen) immediately.
  • In step 1106, once the stranger, co-worker, etc. is no longer in the sight of the device, the document re-opens for the authorized recipient to view, edit, etc.
  • the invention may use a tracking process.
  • The invention checks and tracks the new face to determine whether it belongs to a registered user of the present invention. If a person appears in the camera's frame whose face isn't recognized as a registered user, the invention does not recognize a match and, in one embodiment, may shut down the screen.
  • FIG. 12 depicts Function 3—“Special glasses for viewing the screen”.
  • the owner of the device puts on special glasses.
  • the screen on the device will look scrambled to people who are not wearing the special glasses, which have been designed specifically for that user.
  • In step 1203, assuming the authorized user is wearing the special glasses designed for her/him, that user will be able to securely view the screen without any problems.
  • FIG. 13 depicts an example flow of an enrollment process according to the present invention, in one embodiment.
  • FIG. 13 will be described in further detail, below, according to one embodiment.
  • A user may log in to an existing account (1301) or register a new one (1302). If the user logs in to the application, the Cloud loads all necessary data for user identification (1303). If the user chooses to create an account, he or she may go through the enrollment phase beginning at step 1304.
  • In the enrollment phase 1305, the application asks the user to define secure profiles.
  • The user may choose the secure profile mode (automatic 1307 or manual 1308).
  • secure profiles can be used. They will determine the security level of each document and of all secure environments; for example, in a public place the security level may be the highest, whereas at home it may be the lowest.
  • Table 1 below provides an example of different secure profile levels which may be used with the present invention. This is just one example—these secure profiles may be implemented in a variety of ways.
  • FIG. 14 depicts an example of the flow of the secure phase 1315 , in one embodiment of the invention.
  • FIG. 14 will be described in further detail, below.
  • the present invention asks for identification according to which a secure profile is activated, based on whether identification takes place (1402) or not (1403).
  • the present invention asks for identification according to which a secure profile is activated, based on whether identification takes place (1405) or not (1406).
  • the present invention asks for identification according to which a secure profile is activated, based on whether identification takes place (1411, in which case the file is shared in step 1413) or not (1412).
  • continuous identification 1408 is performed according to the activated secure profile, maintaining confidentiality (1409).
  • the document may be encoded before sharing, and decoding can be performed using the same secure application.
  • the most secure documents can't be downloaded to the device in this situation. Also, in offline mode, and in one embodiment, not all biometric recognition methods will be available.
  • the downloaded documents may be merged with documents in the Cloud, and all secured documents will be erased.
  • the present invention provides security of documents during storage, and also security while reading or editing the document.
  • the security of the documents may be provided through an application/software. As described previously, at the opening of the application/software the user will have to pass a primary person identification.
  • Primary person identification may be based on all (or a subset) of the biometric identification technologies described previously, such as face recognition, fingerprint, voice recognition, and iris recognition. Moreover, primary person identification may depend on the secure profile level: for a higher secure profile level, more methods of identification of a person's identity may be required. In one embodiment, the identification method may be chosen randomly every time the user tries to log in to the application. To achieve better security, in addition to the primary person identification, a liveness check may also be applied depending on the secure profile level.
  • a user may be required to register with the application in which the user is required to perform an enrollment phase (as described previously) that includes: enrollment of iris for iris recognition, enrollment of face for face recognition, enrollment of speech for voice recognition and enrollment of finger for fingerprint recognition.
  • the enrollment phase may be necessary even if the device already has some biometric signature, given the need to obtain the most up-to-date biometric signature.
  • Additional personality recognition may be performed by device verification, such as by:
  • a secure profile associated with each document may be used to determine the security level for the documents, and for all secure environments. The user can change existing secure profiles as needed, and will be able to determine, for each individual document, the level of protection and the methods for recognizing the user. For each document, biometric personality recognition may be used, depending on the security level of the document.
  • continuous identification may be performed that is based, for example, on iris recognition technology to verify that the authorized person is still working with the device.
  • a continuous identification process may run at all times in the background—for example, iris recognition may be performed periodically (such as every 10 seconds, or another suitable interval). Having iris recognition run only periodically balances effectiveness and performance. For iris recognition to perform properly, the person's eyes must be open; if the person blinks, iris detection may fail to detect the person's iris. In this case, backup identification based on another biometric parameter, such as face recognition, may be performed to verify that an authorized person is still working with the device, as sketched below.
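  • A hypothetical sketch of this periodic check with a backup method follows; all callables are invented stand-ins, and the 10-second period mirrors the example above:

```python
import time

def continuous_identification(grab_frame, iris_verify, face_verify,
                              on_failure, period=10.0):
    """Run iris recognition every `period` seconds; fall back to face
    recognition when no iris is detectable (e.g., during a blink)."""
    while True:
        frame = grab_frame()
        result = iris_verify(frame)      # True / False / None (no iris seen)
        if result is None:
            result = face_verify(frame)  # backup identification
        if not result:
            on_failure()                 # e.g., lock or blank the screen
        time.sleep(period)
```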
  • face detection technology may be used to verify that only an authorized person is looking at the device screen. This process may run at all times (or periodically) in the background, search for additional faces that enter the camera range, and then check how long any extra face stays in the camera range. If the face is in the camera range for less than, for example, 2 seconds, then no action may be taken, in one embodiment. Otherwise, the present invention may show a warning on the screen notifying the owner that someone else is looking at his phone or device. If, within, for example, a 10-second period after the message appears, no response is received from the owner, then the device screen will automatically turn off. Additionally, when a warning appears, the owner will have the option to pause “confidentiality” and “continuous identification” to show the document to a companion. The user will be able to change the delay and related options.
  • the present invention may also try to identify this second person using, for example, face recognition. If it turns out that this second user already has permission to see the document (the user defined him in the enrollment phase), then the alert may automatically be removed. Otherwise, the screen may be turned off if there is no response from the owner's side.
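  • A hypothetical state machine for this warning logic follows, using the example thresholds above (2-second grace, 10-second response window); the class and method names are invented:

```python
import time

GRACE_S = 2.0      # ignore intrusions shorter than this (the 2 s example above)
RESPONSE_S = 10.0  # time the owner has to respond to the warning

class ConfidentialityMonitor:
    """Hypothetical state machine for the multi-viewer warning logic."""

    def __init__(self):
        self.intruder_since = None
        self.warned_at = None

    def update(self, extra_face_present, owner_paused, now=None):
        now = time.monotonic() if now is None else now
        if not extra_face_present:
            self.intruder_since = self.warned_at = None
            return "ok"
        if self.intruder_since is None:
            self.intruder_since = now
        if now - self.intruder_since < GRACE_S:
            return "ok"              # brief glance: take no action
        if owner_paused:
            return "paused"          # owner chose to show the document
        if self.warned_at is None:
            self.warned_at = now
            return "warn"            # display the on-screen warning
        if now - self.warned_at >= RESPONSE_S:
            return "screen_off"      # no response: turn the screen off
        return "warn"
```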
  • Pause Security is an option to allow the pause of the “confidentiality” and “continuous identification” feature—for example, if an authorized user wants to show the document to a companion.
  • the Pause Security option can be enabled when the system detects an additional face in the camera range or when the owner enables it from the settings. In one embodiment, and for additional security, to enable Pause Security the user may need to perform one random recognition.
  • all documents and all biometric vectors may be stored in the cloud, in one embodiment. This means that in this embodiment the full engine will be based in the cloud to prevent identity theft from the device. But for situations where access to the Internet is not available, an offline mode may be turned on. In order to be able to continue working offline it may be necessary to download a document and biometric vector to a local disk before offline mode will be activated.
  • the present invention is able to determine which documents can be viewed or edited.
  • an option may be included whereby especially important documents would not be able to be downloaded to the device. This option may exist because offline mode supports only a limited set of biometric identification possibilities, and because the device may not have all the necessary hardware to support all secure options.
  • the continuation and backup identification function may be available.
  • the downloaded document may be synchronized with the document in the cloud, and after synchronization is complete, documents and biometric vectors may be erased automatically from the local disk.
  • If the security level of the downloaded document is 0, then it can be stored on the device for an unlimited time, but after X time the biometric vector will be erased automatically from the local disk. In one embodiment, the user must determine X before enabling offline mode.
  • the user may be able to create/select a folder and define it as a secure zone for downloaded documents and biometric vector.
  • the user can add applications (for example eMail/word/pdf) to a secure environment.
  • Different types of information about the document may be stored in the cloud, for example: the date the document was edited, the name of the last editor, etc.
  • Document sharing may be possible, but only through the application, which means that the receiving party must also be authorized in the application/software and be able to open the document with its biometric identification.
  • behavioral profiling may also be used as a passive method of owner recognition.
  • a device such as a mobile device (laptop 1501 , tablet 1503 , smartphone 1504 , etc.), a desktop computer 1502 , or any other type of device may have an associated display device or screen, through which a document or any other information may be displayed to a user.
  • the device(s) may have their own storage (hard disk, solid state drive, memory, etc.), and may also communicate with storage and functionality in a remote location, such as through a cloud computing platform 1510 . Documents and other data may be transmitted between the devices and the cloud, and may also be stored locally on the device(s) itself.
  • An app developed in accordance with the teachings of the present invention may be developed using appropriate iOS, Android or other types of tools. If iOS or Android, the resulting app may appear in the iOS and/or Android store as a B2C app and will function accordingly:
  • the system may use a cloud service (such as Amazon AWS cloud) to store all of their data.
  • sample screenshots, in one embodiment, from, for example, a smartphone operating in accordance with the present invention are depicted in FIGS. 16A to 16LLL.
  • FIGS. 16A through 16LLL are described in further detail below. Again, the ordering and operations described below are simply illustrative—variations are possible while still operating within the scope of the present invention.
  • FIG. 16A This depicts a sample home screen where the app of the present invention may be located on a user's phone.
  • FIG. 16B This depicts a sample first screen that shows when the app is activated, for the user to begin using the app.
  • FIG. 16C This is where the user logs in as a new user or an existing user, such as with their cell phone number or username.
  • FIG. 16D The new user logs in with their cell phone number.
  • FIG. 16E The new user receives an SMS text message with a verification code.
  • FIG. 16F The new user enters the verification code.
  • FIG. 16G The new user enters their first name, last name, and a username, as an example.
  • FIG. 16H The user sets up their iris recognition, facial recognition, fingerprint pattern, voice recognition, and creates a pattern swipe.
  • FIG. 16I This is where a user is notified that they should set up all of their security features if they have not done so.
  • FIG. 16J This is where the user returns and uses pattern swipe to enter the app/system.
  • FIG. 16K This is where the user returns and uses iris recognition to enter the present invention.
  • FIG. 16L This is where the user returns and uses facial recognition to enter the present invention.
  • FIG. 16M This is where the user returns and uses voice recognition to enter the present invention.
  • FIG. 16N This is where the user returns and uses fingerprint recognition to enter the present invention.
  • FIG. 16O This is the location where files are stored once a user creates and names each folder.
  • FIG. 16P This is the location where files are located once they are uploaded to the present invention.
  • FIG. 16Q This is the homepage where Settings, Notifications, About, Security, Support, Storage, and Login using a desktop are located.
  • FIG. 16R This is where the user's shared documents and files are located.
  • FIG. 16S This is where recently uploaded documents are located.
  • FIG. 16T This is where information about each file is shown, including, for example, the file type, size, owner, security, sharing, location, and who created the file.
  • FIG. 16U This is where the user can push the plus button and scan a document, upload a file, or create a folder.
  • FIG. 16V This is just some sample text that may be displayed.
  • FIG. 16W This is where the user can do a “Quick Share” and choose Level 1, Level 2, Level 3, or Level 4 for security. The user can also choose View only, Notifications, or offline mode for the recipient of the document to use before the document opens.
  • FIG. 16X This is where the user can delete, rename, move, or star (tag as important) the file.
  • FIG. 16Y This is where the document can be downloaded or not.
  • FIG. 16Z This refers to the offline mode where the user can choose a time limit on the offline mode.
  • FIG. 16AA This is where the user decides who to share the documents with.
  • FIG. 16BB This shows the details of the document such as size, owner, file type, who created the document, location, security, and sharing of the document.
  • FIG. 16CC This is where the user selects the level of security for each document sent first before a document will open for the receiver of the document. This is an example of how security levels may be used.
  • Level 1: One biometric, push notification
  • FIG. 16DD This is where the user may officially select the biometrics that will be used by the receiver to open the document.
  • FIG. 16EE Sharing a file:
  • FIG. 16FF Select a biometric for the receiver to use before the file opens to the receiver.
  • FIG. 16GG This is where the user clicks share or cancel on the file to be sent.
  • FIG. 16HH This is where the user selects the time limit that a recipient can have access to the file that the user sends to them.
  • FIG. 16II This is where the user selects the WiFi IP addresses for the recipient to use in order to access the document that the user sends to the recipient.
  • FIG. 16JJ This is where the user selects the location of where the recipient can access the document that was sent.
  • FIG. 16KK This is where the user selects if the receiver is allowed to View, Download or Share the document.
  • FIG. 16LL This is where the user can rename a file.
  • FIG. 16MM This is where the user can decide to delete a file.
  • FIG. 16NN This is where the user can move files into any folder that he or she decides.
  • FIG. 16OO This is where the user may add their name, picture, and location.
  • FIG. 16PP Notifications: This is where the user may specify to be notified when someone adds a file, or shares a file.
  • FIG. 16QQ This is the About Us section that may tell the users about the app/system.
  • FIG. 16RR This is where the user may use a QR Code from a website or any location where it's located, to use the present invention as a Web App.
  • FIG. 16SS This is where the user may add their picture, name, location, phone number and username.
  • FIG. 16TT This is where the present invention may show the security that the user set up to access the present invention, as well as safe WiFi IP addresses and safe locations.
  • FIG. 16UU This is the notifications section when someone shares a file, or adds a file.
  • FIG. 16VV This is the screen that shows when the authorized user has left the camera's view, and is away from the device.
  • FIG. 16WW When an unauthorized viewer looks at the user's screen, the facial recognition recognizes the unauthorized user, and the screen then shows a warning sign indicating that multiple viewers are looking at the user's screen.
  • FIG. 16XX This screen is for the introduction of setting up fingerprint recognition.
  • FIG. 16YY This is the screen where the authorized user sets up his or her fingerprint recognition.
  • FIG. 16ZZ This is the screen where it shows that the authorized user has successfully set up fingerprint recognition.
  • FIG. 16AAA This is the screen for the introduction on setting up iris recognition.
  • FIG. 16BBB This is where the user sets up the iris recognition.
  • FIG. 16CCC This is where it shows that iris recognition is successfully set up.
  • FIG. 16DDD This is the introduction to setting up facial recognition.
  • FIG. 16EEE This is where the user sets up the facial recognition.
  • FIG. 16FFF This is where it shows that the user has successfully set up facial recognition.
  • FIG. 16GGG This is the introduction of setting up the swipe pattern.
  • FIG. 16HHH This is where the user has set up the swipe pattern.
  • FIG. 16III This is where the swipe pattern is confirmed.
  • FIG. 16JJJ This is the introduction for setting up voice recognition.
  • FIG. 16KKK This is where the user adds their voice recognition.
  • FIG. 16LLL This is where the user has successfully set up the voice recognition.
  • the app of the present invention may have an admin panel, for example for business clients.
  • the admin panel may include a variety of features that will help security professionals who manage the company's documents have oversight of documents and files.
  • the Admin may optionally receive notifications, as described below:
  • Embodiments of the present disclosure can be implemented in hardware, or as a combination of software and hardware. Consequently, embodiments of the disclosure may be implemented in the environment of a computer system or other processing system.
  • Suitable development platforms may be used to implement the various features of the present invention, whether implemented on a server, on the client side (e.g., as an app on a mobile device), or the like. Those skilled in the art will be familiar with such development platforms.
  • features of the present invention may be implemented in hardware using, for example, hardware components such as application-specific integrated circuits (ASICs) and gate arrays, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention allows for the creation of a biometrically secure environment that allows viewing, editing and sharing of confidential documents, or the like, in public places, without worrying that someone will see the contents. The invention provides privacy, for example for the purposes of reading documents, in a public environment while having the confidence that you are the only one able to read the document. Privacy may be achieved through methods of identification using biometric features, such as: face, iris or voice recognition. Verification that a real person is viewing the document may also be achieved by pulse recognition. In one embodiment, the screen will shut down when more than one person looks directly at the screen.

Description

CROSS-REFERENCE TO RELATED PATENT APPLICATION
The present patent application claims priority to U.S. Provisional Patent Application No. 62/720,543, filed Aug. 21, 2018, and entitled “System and Method for Securely Viewing and Editing Documents and Other Information”, the disclosure of which is incorporated herein by reference thereto.
BACKGROUND AND SUMMARY OF THE INVENTION
The present invention allows for the creation of a biometrically secure environment that allows viewing, editing and sharing of confidential documents, or the like, in public places, without concern that someone else will see the contents.
The invention provides privacy, for example for the purpose of reading documents in a public environment, with the confidence that you are the only one able to read the document. Privacy may be achieved through methods of identification using biometric features, such as face, iris or voice recognition. Verification that a real person is viewing the document may also be achieved by pulse recognition. In one embodiment, the screen will shut down when more than one person looks directly at the screen.
A second layer of security in the form of liveness checks may also be provided. For example, this may be accomplished using pulse detection, in one embodiment.
In one embodiment, the present invention may be implemented as a hardware product, a software product, or a combination thereof.
As will be explained in further detail below, some of the features of the present invention may include:
    • Primary person identification—This is a “Log in” to a secure environment.
    • Liveness—Help verify that a real person is trying to “Log in”.
    • Continued identification—Check that the authorized person is still reading the document.
    • Backup identification—Additional method to check that the authorized person is still reading the document; this may be needed if the first method fails.
    • Confidentiality—Verify that only authorized people are looking at the screen.
    • Pause security—Give people who are not authorized access to read the confidential document.
    • Several secure profiles—to control the security level of the documents; for example, in a public place a person may use the most secure setting, and at home the least secure.
    • Identify secure profile according to location/Wi-Fi network—this can help determine which secure profile needs to be executed.
    • Behavior recognition—helps identify people based on how they type text or how they answer the phone.
In various embodiments, the present invention may be implemented on a variety of platforms, such as mobile device, tablet, laptop, desktop, etc., using a camera, microphone, etc.
To use some applications and systems, a user has traditionally needed to perform a log-in operation, using a user ID and password that identify the user. However, a password leaves the user with a very low level of protection. Most users are not fully aware of today's cyber-security dangers and use passwords that are easily hacked. The use of more elaborate passwords results in forgetfulness and can, in some cases, lead to keeping a log of all passwords, which obviously contradicts the entire purpose of a complex password.
In contrast, biometric identification offers a solution serving both purposes. The use of biometric technologies ensures that the user will never need to remember a password again, while simultaneously providing a very high level of security.
Biometric identification methods include, amongst others, face recognition (identification based on an image or video of a subject's face), iris recognition (identification based on an image or video of a subject's iris), fingerprint recognition (identification based on a subject's fingerprint), voice recognition (identification based on a voice sample of the subject), a combination of any of the above methods, or other methods.
Hacking a biometric system is not simple, yet it is possible. Many recent cases have been documented and publicized, such as the fooling of the Samsung Galaxy S8 iris scanner in May 2017, using a camera, a laser printer and a contact lens. In September 2017, researchers were able to bypass Apple's Face ID with a 3D-printed mask of a researcher's face, made of stone powder. The total cost of the materials used was approximately $150.
A solution for avoiding such hacks is, as taught by the present invention, the use of "liveness checks." A liveness check allows a biometric application to discriminate between the real biometric factor of a subject and artificial copies of the features making up that biometric factor. Liveness detection reduces the likelihood that spoofing attempts will succeed, and as such reduces the false acceptance rate. An example of a liveness check is facial recognition software requiring the subject to blink, smile or give a head nod. However, even these gestures are quite easily faked using a mask with holes in it.
As will be described further below, these limitations may be overcome, such as by using pulse detection using a video that can be obtained using a dual camera array.
A secure environment must be protected at all times, because identifying a person at the beginning of a document viewing/editing session is not enough. A solution according to the present invention is that, during the viewing/editing of the document, the identification process runs in the background, providing the highest possible level of security by performing an identification screening every X seconds. In addition, a backup identification method is offered as a precaution in the event that the continuous identification fails.
For this method to be seamless and yet achieve the highest levels of performance, a weighting method is devised, comparing the validity of verification from the different methods.
A secure environment should provide complete confidentiality, which means that it is necessary to determine that only one person is looking at a device at a given time. In a case where the system detects a breach, by detecting another face looking at the camera, meaning that another person is attempting to violate the privacy of the prime user, the system may provide an alert or, at certain levels of confidence, even shut off the screen.
Behavioral profiling may be used in order to give the user the most secure environment without harming convenience of use, improving the user experience. Behavioral profiling is the definition of a person by his habits: location, Wi-Fi networks to which he often connects, frequently visited sites, manner of text input, the way in which a person holds his phone, etc. Documents may be secured by using a behavioral profiling score to define the security level that needs to be applied. For example, GPS coordinates or the Wi-Fi network may be used to compute a score: a low score indicates that the user is in a public place, so the security level needs to be set higher; a medium score means the person is in a workplace, and security is set at a medium level; a high score corresponds to, for example, the user being at home, which results in a lower level of security.
In one embodiment, behavioral profiling scores can be calculated from the sensors present in a device. For example, a smartphone or tablet may provide a location sensor, a gyro sensor, and a Wi-Fi/mobile module that can supply relevant information.
For laptops or desktops, location may, for example, be obtained from the IP address, the browser history may be checked, and many more signals can be learned from different sensors and modules.
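By way of illustration only, the sketch below shows one way such signals might be combined into a behavioral profiling score and mapped to a secure profile level. The signal names, thresholds and level numbers are hypothetical assumptions, not values prescribed by the invention.

```python
# Hypothetical sketch: mapping environmental signals to a behavioral
# profiling score, and the score to a secure profile level. All signal
# names, thresholds and level numbers are illustrative assumptions.

def behavioral_score(on_registered_wifi: bool, at_home: bool, at_work: bool) -> int:
    """Higher score = more trusted environment."""
    if at_home and on_registered_wifi:
        return 3  # high score: user is at home
    if at_work or on_registered_wifi:
        return 2  # medium score: workplace or known network
    return 1      # low score: public place or unknown location

def secure_level(score: int) -> int:
    """Lower environmental trust maps to a higher security level."""
    return {3: 1, 2: 3, 1: 5}[score]  # e.g. levels as in Table 1 below

# Example: unknown location and network -> highest secure profile level
assert secure_level(behavioral_score(False, False, False)) == 5
```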
Behavioral profiling can be used in addition to existing biometric methods but, in one embodiment, cannot supply a secure environment on its own if biometric methods are not present.
BRIEF DESCRIPTION OF THE FIGURES
The accompanying figures depict the structure and operation of various aspects of the present invention, in one embodiment.
FIG. 1 is an overview of the enrollment and verification/recognition process of the present invention.
FIG. 2 is a typical facial biometric flow of the present invention.
FIG. 3 depicts common fingerprint patterns.
FIG. 4 depicts fingerprint classification features.
FIG. 5 depicts a typical fingerprint recognition process.
FIG. 6 depicts a typical flow of speaker biometrics.
FIG. 7 depicts an eye and its various parts.
FIG. 8 depicts a typical flow of an iris recognition process.
FIG. 9 depicts a typical smartphone with iris scanning capabilities.
FIG. 10 depicts a first process that may be performed by the present invention.
FIG. 11 depicts a second process that may be performed by the present invention.
FIG. 12 depicts a third process that may be performed by the present invention.
FIG. 13 depicts a typical flow of an enrollment process of the present invention.
FIG. 14 depicts a typical flow of the secure phase of the present invention.
FIG. 15 depicts a typical architecture of the present invention.
FIGS. 16A-16LLL depict various typical screenshots of the present invention, with corresponding function.
DETAILED DESCRIPTION OF INVENTION
Biometrics
Biometrics is the science of detecting and recognizing human characteristics using technologies that measure and analyze individual biological data. The way we are genetically composed creates identifiable traits that can uniquely represent us as individuals. DNA can be used to distinguish between any two individuals, except identical twins. Some biometric traits, such as fingerprints and iris prints, are distinctive even among identical twins.
Current antiquated mechanisms such as keys, passes, tokens, passwords, PINs and secret questions are easily stolen and shared. In contrast, biometrics identifies a person based on their distinctive physiological or behavioral characteristics, and these attributes cannot be shared, misplaced or forgotten. Going forward, it is becoming increasingly important to have confidence in the secure authentication of electronically stored information.
FIG. 1 provides an overview of a typical biometric enrollment 100 and verification and recognition flow 110.
In enrollment step 101, a user's biometric information is presented, captured in step 102, processed in step 103, and stored in a database 104. In the verification/recognition steps, biometric information is presented 111 and captured (step 112), processed in step 113, and in step 114 the biometric data processed in step 113 is compared to the enrollment biometric data stored in the database 104. The result is either a match (115) or no match (116).
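By way of illustration only, the following sketch mirrors the FIG. 1 flow in code. The extract_features function is a hypothetical placeholder for whichever modality-specific processing occurs at steps 103/113, and the similarity threshold is an assumed example value.

```python
import numpy as np

# Minimal sketch of the FIG. 1 flow. extract_features() is a stand-in
# for the modality-specific processing of steps 103/113.

database = {}  # step 104: stored enrollment templates

def extract_features(sample: np.ndarray) -> np.ndarray:
    return sample / np.linalg.norm(sample)  # placeholder processing

def enroll(user_id: str, sample: np.ndarray) -> None:
    database[user_id] = extract_features(sample)   # steps 101-104

def verify(user_id: str, sample: np.ndarray, threshold: float = 0.9) -> bool:
    probe = extract_features(sample)               # steps 111-113
    similarity = float(probe @ database[user_id])  # step 114: compare
    return similarity >= threshold                 # match 115 / no match 116

enroll("alice", np.array([1.0, 2.0, 3.0]))
print(verify("alice", np.array([1.1, 2.0, 2.9])))  # True: close enough
```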
There are different types of biometric processes and techniques that may be used, including, for example: facial biometrics, fingerprint recognition, speaker biometrics, liveness checks, iris recognition, etc.
1. Facial Biometrics
The face is an important part of who you are and how people identify you. Except in the case of identical twins, the face is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish faces for millions of years, computers are only now catching up.
A typical facial biometric flow is depicted in FIG. 2, and described further below.
Face detection is the identification of human faces in digital images. In essence, it is the task of defining bounding boxes that surround human faces. Face detection is a crucial pre-processing step in any face verification system: detecting the faces in the image allows each face to be extracted from the background and analyzed separately.
In a typical embodiment of face detection, with reference to FIG. 2, in step 201, an image is captured, in step 202, a face is detected (as described in further detail, below), in step 203, face alignment is performed, and feature extraction is performed in step 204. In step 205, an image of a new face may be matched to the features of faces stored in database 206, and in step 207 a decision is made as to whether the new face matches a particular stored face.
For purposes of the present invention, face detection may be the tool used to determine whether only a single person is currently viewing the screen (a single face is located in the frame captured by the camera) or multiple people.
Joint Face Detection and Alignment Using Multi-task Cascaded Convolutional Networks is a face detection algorithm that utilizes Multi-Task Cascaded Convolutional Networks (MTCNN). MTCNN makes use of the different levels of the image's Gaussian pyramid to create a bounding box and extract facial landmarks. The procedure may be performed using a three-stage deep convolutional network, with each stage outputting a more refined and more accurate depiction of the face's location in the image.
YOLO: Real-Time Object Detection is a concept recently introduced in the field of real-time object detection. YOLO divides a given image into a square grid, predicts bounding boxes for each grid cell, and predicts a class probability for each. As a final step, the bounding boxes are merged and a final class prediction is given. YOLO is known to work at real-time rates even when running on relatively simple devices. Since a viewed face is simply an object in a digital image, the use of such an advanced, state-of-the-art detection system is a suitable choice.
Face alignment is the task of deforming two sets of facial landmarks such that both eventually have the same orientation. The task is vital in order to compensate for the high variability of poses in which the face may be captured. It allows face verification to be feasible without having the user enroll in all possible poses to the camera. The procedure usually involves the use of facial landmarks and creates the best possible correspondence between these landmarks.
One Millisecond Face Alignment with an Ensemble of Regression Trees. In this technique, the framework learns face landmarks in preparation for the actual face alignment, allowing an initial extraction of facial landmarks from the image and alignment of the image according to those landmarks. This a priori step allows for real-time performance. The alignment process itself may use a cascaded regression pipeline.
Deep Alignment Network: a convolutional neural network for robust face alignment. Deep Alignment Network (DAN) is a deep neural network that includes several stages. Each stage of DAN includes a feed-forward neural network that performs landmark location estimation, plus connection layers that generate the input for the next stage. DAN passes three inputs between stages: the input image, warped so that the current landmark estimates are aligned with the canonical shape; a landmark heatmap; and a feature image. The advantage of DAN is that it extracts features from the entire face image rather than from patches around landmark locations. An additional advantage is the use of a landmark heatmap, an image with high intensity values around landmark locations, where intensity decreases with distance from the nearest landmark.
Feature extraction is the stage in which a biometric vector is extracted from the image, to later be used by the matcher to determine whether two facial images depict the same person.
FaceNet: A Unified Embedding for Face Recognition and Clustering. FaceNet is a system that directly learns a mapping from face images to a compact Euclidean space, where distances directly correspond to a measure of face similarity. The framework goes through a learning procedure allowing it to extract features capable of differentiating facial images of different people. This framework is based on the use of positive and negative examples of facial images. For each user, an anchor is determined. The framework attempts to bring positive examples (images of the same individual) "closer" in the formed feature space and negative examples (images of different individuals) "further away." This framework makes it possible to determine whether two facial images came from the same source or from different sources.
Video-Based Face Recognition Using Ensemble of Haar-Like Deep Convolutional Neural Networks. This framework treats the feature extraction stage as a two-stage operation. In the first stage, a neural network extracts features from the facial image. In the second stage, three networks embed asymmetrical and complex facial features. This framework extracts facial embeddings that can be fed into a matcher deciding whether two faces came from the same source or from different sources.
Feature matching is the matching of feature vectors extracted from two different images. The matching may be performed in various different ways, including but not limited to the following (a sketch of the Euclidean option appears after the list):
    • Euclidean distance—calculation of the Euclidean distance between two feature vectors and making a decision based on the distance between the two.
    • Feature-Based Block Matching Algorithm—match face features by integral projection technique.
    • Neural network—use a classifier to match between features.
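As an illustration of the first (Euclidean) option above, a minimal sketch follows; the decision threshold is an assumed example and would in practice be tuned on validation data for the particular embedding used.

```python
import numpy as np

def euclidean_match(feat_a: np.ndarray, feat_b: np.ndarray,
                    threshold: float = 1.1) -> bool:
    """Decide whether two face feature vectors belong to the same person.

    The threshold is illustrative; it depends on the embedding model
    and would be tuned empirically.
    """
    return float(np.linalg.norm(feat_a - feat_b)) < threshold

# Example with toy vectors: a small distance means "same person"
print(euclidean_match(np.array([0.1, 0.9]), np.array([0.2, 0.8])))  # True
```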
Some advantages of Face Recognition include:
    • All required hardware is comfortably present on virtually any modern smartphone/tablet/laptop/desktop.
    • As with many biometric methods, it makes remembering passwords completely redundant.
    • The system places minimal requirements on the user. The use of state-of-the-art detection and alignment algorithms has ushered in a new age in which a user can be identified in virtually any pose and in most illumination conditions.
Disadvantages of Face Recognition include:
    • Very simple manipulations may be performed to fake a person's face. However, solutions to prevent this from happening are known.
    • Face verification requires a minimal amount of illumination; in the absence of such illumination, the method may fail.
2. Fingerprint Recognition
A fingerprint, in its narrow sense, is an impression left by the friction ridges of a human finger. The fingerprint pattern is permanent and unchangeable. The probability that the fingerprints of two individuals are alike is approximately 1 in a quadrillion.
FIG. 3 depicts common fingerprint patterns.
Most automatic systems for fingerprint matching are based on minutiae matching. Minutiae classification is based on:
    • Crossover: two ridges cross each other
    • Core: center
    • Bifurcation: ridge separates
    • Ridge ending: end point
    • Island: small ridge between two spaces
    • Delta: space between ridges
    • Pore: human pore
FIG. 4 depicts fingerprint classification features, such as Crossover 401, Core 402, Bifurcation 403, Ridge Ending 404, Island 405, Delta 406 and Pore 407.
FIG. 5 depicts a typical fingerprint recognition flow, described in further detail hereafter.
In step 501, a fingerprint is captured, in 502 pre-processing occurs, features are extracted in step 503, and fingerprint matching is performed in step 504, based on stored fingerprints in database 505.
Two main technologies may be used to capture an image of the fingerprint:
    • Optical—uses light refracted through a prism.
    • Capacitive-based—detects voltage changes in the skin between ridges and valleys.
Pre-processing is the method that prepares the image to facilitate further work with it. Pre-processing can include enhancement of the image, binarization of the image, finding the region of interest (ROI), thinning of the fingerprint image, detecting the core point, and minutiae extraction.
Thinning of the fingerprint image: Generally, the gray values of the pixels of a ridge in the fingerprint image gradually decrease going from an edge towards the center of the ridge line, then increase again going towards the other edge. This is the definition of a local minimum. The idea is to capture this local-minimum line, converting a ridge that is (e.g.) 5 pixels wide into one that is one pixel wide.
Core point detection: The core (or singular) point of a fingerprint is defined as "the point of maximum curvature on the convex ridge," which is usually located in the central area of the fingerprint. Reliable detection of the position of this reference point can be accomplished by detecting the maximum curvature.
Minutiae extraction: Most fingerprint minutiae extraction methods are thinning-based, where the skeletonization process converts each ridge to one pixel wide. Minutiae points are detected by locating the end points and bifurcation points on the thinned ridge skeleton, based on the number of neighboring pixels: end points are selected if they have a single neighbor, and bifurcation points are selected if they have more than two neighbors.
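By way of illustration only, the neighbor-counting rule just described can be sketched as follows, assuming a binary, one-pixel-wide skeleton array (1 = ridge pixel):

```python
import numpy as np

def classify_minutiae(skeleton: np.ndarray):
    """Locate minutiae on a thinned binary ridge skeleton.

    Per the rule above: a ridge ending has exactly one neighboring
    ridge pixel; a bifurcation has more than two.
    """
    endings, bifurcations = [], []
    rows, cols = skeleton.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skeleton[r, c]:
                # Count ridge pixels in the 3x3 neighborhood, minus the pixel itself.
                neighbors = int(skeleton[r-1:r+2, c-1:c+2].sum()) - 1
                if neighbors == 1:
                    endings.append((r, c))
                elif neighbors > 2:
                    bifurcations.append((r, c))
    return endings, bifurcations
```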
A feature is a piece of information that describes a specific part of an input image. In feature extraction, features are extracted that create a unique ID code for each fingerprint. The extracted features are used in the final feature matching stage to perform fingerprint recognition.
The features of the fingerprint will be represented by the number of minutiae of each type within a specific distance from the core point. This is achieved by dividing the fingerprint image into concentric tracks around the core point.
Fingerprint matching is the process used to determine whether two sets of fingerprint features come from the same finger. One fingerprint feature set is stored in the database, and the other is computed from the acquired fingerprint image.
The matching may be performed in various different ways, including but not limited to:
    • Euclidean distance—calculation of the Euclidean distance between two feature vectors and making a decision based on the distance between the two.
    • Use deep structures, for example Fingerprint Pattern Classification Using Convolution Neural Networks, or High-Resolution Mobile Fingerprint Matching via Deep Joint KNN-Triplet Embedding.
Advantages of fingerprint recognition include that it is a very fast technology and that the probability of two individuals' fingerprints being alike is approximately 1 in a quadrillion. The disadvantages include that a fingerprint scanner is required, and that acidity can change a fingerprint.
3. Speaker Biometrics
Speaker recognition is the identification of a person from characteristics of voices. It is also called voice recognition. There is a difference between speaker recognition (recognizing who is speaking) and speech recognition (recognizing what is being said). These two terms are frequently confused, and “voice recognition” can be used for both.
Speaker verification may be performed text-dependent (a unique pass phrase for a user) or text-independent (the user is verified based on voice print alone, independently of what is said). This section focuses solely on text-independent methods, as we see these methods as the future of the field. Moreover, using text-independent recognition, one may add a second layer of pass-phrase matching to convert it into a text-dependent method.
Voice Activity Detection is the process in which voiced segments are extracted from the entire speech signal. Speaker analysis should preferably be performed only on segments recognized as speech, as silent segments are shared amongst all speech signals.
Most speaker verification techniques are based on features called Mel-Frequency-Cepstral-Coefficients (MFCC). MFCC is a representation of the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. The MFCC features are extracted directly from the audio signal after several pre-processing steps that may vary as a result of different classification algorithms.
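As an illustrative sketch only (assuming the open-source librosa audio library and a local recording named utterance.wav), MFCC features may be extracted as follows:

```python
import librosa

# Illustrative only: assumes the librosa library and a local audio file.
y, sr = librosa.load("utterance.wav", sr=16000)     # mono signal + sample rate
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
print(mfcc.shape)  # one 13-coefficient vector per short-term frame
```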
FIG. 6 depicts the typical flow of speaker biometrics, described in further detail hereafter.
In one embodiment, two overall processes may be used for speaker biometrics—enrollment 600 and verification/recognition 610.
In step 601 of the enrollment process, the speech biometrics of a user to be enrolled may be presented. Features of the speech may be extracted in step 602, and a model may be trained (as will be described in further detail below) in step 603. A voiceprint is created and stored in step 604.
In the verification/recognition process 610, a person's speech biometrics may be presented in step 611, features are extracted in step 612, and in step 613, the extracted features are compared to the voiceprints (stored in step 604), resulting in a decision—match (614) or no match (615).
Again, most speaker verification techniques are based on MFCC features, extracted directly from the audio signal after several pre-processing steps that may vary with the classification algorithm.
More recent approaches make use of various deep learning algorithms. The learning portion in these methods attempts to find the best possible features for discriminating between different speakers. These algorithms provide an “extractor” to be used in extracting features from new, unseen utters.
In one embodiment of a speaker verification system, the system is built around the likelihood ratio test for verification, using simple but effective Gaussian mixture models (GMMs) for likelihood functions, a universal background model (UBM) for alternative speaker representation, and a form of Bayesian adaptation to derive speaker models from the UBM. This method requires an initial learning step in which the UBM is created; this model is meant to capture the total variance of all possible speakers. The result is a model of the distribution of all background speakers.
At the verification phase, an utterance is tested for the probability that it belongs to a given speaker's model or to the UBM, and a decision is made based on the likelihood ratio. More recent approaches use SVMs (support vector machines) and deep networks for binary classification for the task of verification.
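By way of illustration only, a much-simplified sketch of likelihood-ratio scoring follows, using scikit-learn's GaussianMixture on synthetic stand-in MFCC frames. For brevity, the speaker model here is trained independently rather than derived from the UBM by Bayesian adaptation as described above, and the zero decision threshold is an assumed example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
ubm_frames = rng.normal(size=(5000, 13))               # pooled background MFCCs
speaker_frames = rng.normal(0.5, 1.0, size=(500, 13))  # enrolled speaker MFCCs
test_frames = rng.normal(0.5, 1.0, size=(200, 13))     # utterance to verify

ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(ubm_frames)
speaker = GaussianMixture(n_components=8, covariance_type="diag",
                          random_state=0).fit(speaker_frames)

# Average log-likelihood ratio over the test frames.
llr = speaker.score(test_frames) - ubm.score(test_frames)
accept = llr > 0.0  # illustrative decision threshold
print(f"LLR={llr:.2f}, accept={accept}")
```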
4. Liveness Checks
A liveness check is a method used to verify that a real person is attempting biometric recognition to enter the device. A liveness check is used in addition to a biometric recognition method (such as face recognition, iris recognition or voice recognition). Liveness detection reduces the likelihood that spoofing attempts will succeed, and as such reduces the false acceptance rate. Examples of liveness checks are described below.
Pulse recognition—Pulse may be extracted from video, for example using known techniques.
Advantages of pulse recognition include:
    • Needs only a short time to detect a pulse.
    • Can work on small devices such as smartphones.
    • Helps stop spoofing attacks.
Disadvantages of pulse recognition include:
    • Usually must be used with additional methods (like face recognition).
    • The device typically must have two cameras.
    • Reduces user experience, because of the need to add instructions.
    • Can reduce performance of recognition system, because the liveness check requires computing power.
Blinking—a person may be asked to blink, and then a camera may be used to recognize when the person is blinking.
Advantages of blinking recognition include:
    • Easy to verify that a real person is trying to enter the device
    • Can be performed on all devices with a camera
    • Helps stop spoofing attacks.
Disadvantage of blinking recognition include:
    • Can only be used with additional methods (like face recognition).
    • Reduces user experience, because of the need to add instructions.
    • Can reduce performance of recognition system, because the liveness check requires computing power.
Voice recognition—ask a person to say one of several random sequences that appear in the database, and then match the recorded voice pattern with the voice pattern stored in the database.
Advantages of voice recognition include:
    • Helps stop spoofing attacks.
    • Can be performed on all devices with microphone.
Disadvantages of voice recognition include:
    • Must typically be used with additional methods (like face recognition).
    • Reduces user experience, because of the need to add instructions.
    • Can reduce the performance of a recognition system, because the liveness check requires computing power.
5. Iris Recognition
The iris is a thin circular diaphragm, which lies between the cornea and the lens of the human eye. It is perforated close to its center by a circular aperture known as the pupil. The function of the iris is to control the amount of light entering the eye by controlling the diameter and size of the pupil. The eye color is defined by that of the iris.
FIG. 7 depicts an eye and its various parts—for example, an iris 701, pupil 702 and sclera 703.
FIG. 8 depicts a typical flow of an iris recognition process, described in further detail hereafter.
In the enrollment stage 800, image acquisition 801 deals with capturing a sequence of iris images from the subject using cameras, desirably with sensors that have high resolution and good sharpness. Today it is possible to do so using the front smartphone camera or a modern camera, as examples.
To perform iris recognition, the minimum size of the iris needs to be at least 70 pixels, in one embodiment. To achieve an iris size of at least 70 pixels, the maximum distance that the phone can be placed from the eyes is around 50 centimeters. The iris needs to be clearly visible in the image, in one embodiment.
FIG. 9 depicts a smartphone 901 (in this case made by Samsung), with iris scanning capabilities.
Referring back to FIG. 8, after an optional pre-processing step (802), iris segmentation 803 is used to locate the valid part of the iris for iris biometrics, including finding the pupillary and limbic boundaries of the iris, localizing its upper and lower eyelids if they occlude, and detecting and excluding any superimposed occlusions of eyelashes, shadows, or reflections. Iris segmentation can greatly affect the iris features that are needed for iris recognition.
Iris segmentation may be performed in various ways, as described in further detail below.
Daugman's algorithm method for iris recognition—a biometric approach. The iris region can be approximated by two circles: one for the iris/sclera boundary and another, interior to the first, for the iris/pupil boundary. To find the two circles used for approximating the iris region, a combination of circular and linear Hough transforms can be used. The use of the two transforms yields a valid approximation of the iris in the image.
Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environments. This method uses a two-stage iris segmentation approach based on convolutional neural networks (CNNs), capable of robustly finding the true iris boundary in difficult cases with limited user cooperation. The first stage is pre-processing, which includes bottom-hat filtering, noise removal, a Canny edge detector, contrast enhancement, and a modified Hough transform to approximate the iris boundary. The second stage is a deep CNN that takes a fixed-size input image and then fits the true iris boundary. This second stage is applied only to a region of interest (ROI) defined by the approximate iris boundary detected in the first stage; applying the second stage only to the ROI reduces the processing time and the error of iris segmentation. The result of the segmentation stage can be given to iris feature extraction for future matching.
After detecting the bounded iris in the segmentation stage (803), a normalization stage 804 may be performed, in which the bounded iris is converted to an accepted iris template. This is needed, in one embodiment, for future matching 807 between the iris template and iris templates from database 806. Typically, the process is a conversion from Cartesian to non-concentric polar representation of the iris template.
Daugman's Rubber Sheet Model. Daugman's rubber sheet model ensures proper handling of the specifics of the iris. This model converts the iris area from a Cartesian representation to a polar representation, mapping each pixel in the iris area to a pair of polar coordinates (r, θ), where r and θ are on the intervals [0, 1] and [0, 2π], respectively. It accounts for size inconsistencies and pupil dilation of the iris area, but does not compensate for rotational inconsistencies between templates. The output of this stage is an iris template with a polar representation that is consistent with the template sizes in the database.
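By way of illustration only, a simplified numpy sketch of this unwrapping follows. It assumes concentric pupil and iris boundaries and nearest-pixel sampling; a production implementation would handle non-concentric boundaries and use interpolation.

```python
import numpy as np

def rubber_sheet(image: np.ndarray, center: tuple, pupil_r: float,
                 iris_r: float, out_h: int = 64, out_w: int = 512) -> np.ndarray:
    """Unwrap the iris annulus into a fixed-size polar template.

    Rows sample r in [0, 1] (pupil edge -> iris edge); columns sample
    theta in [0, 2*pi). Assumes concentric boundaries for simplicity.
    """
    cx, cy = center
    r = np.linspace(0.0, 1.0, out_h)[:, None]
    theta = np.linspace(0.0, 2 * np.pi, out_w, endpoint=False)[None, :]
    radius = pupil_r + r * (iris_r - pupil_r)
    x = np.clip((cx + radius * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
    y = np.clip((cy + radius * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
    return image[y, x]  # nearest-pixel sampling of the polar grid

# Example on a synthetic image
template = rubber_sheet(np.random.rand(480, 640), (320, 240), 40.0, 110.0)
print(template.shape)  # (64, 512)
```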
Image registration, as proposed by Wildes et al. Wildes proposed an image registration technique for normalizing iris textures. A newly acquired image is aligned with an image in the database, and a comparison is performed. The alignment process is a transformation using a choice of mapping function, and it compensates for rotation and scale variations. It must be noted that this normalization is performed at matching time.
A feature is a piece of information that describes a specific part of an input image. In the feature extraction stage 805, features are extracted that create a unique ID code for each normalized iris representation. Extracted features are used in a final feature matching stage 807 to perform iris recognition (with a result of match 808 or no match 809).
To use the iris recognition of FIG. 8, the verification/recognition stage 810 may be performed. First, image acquisition 811 is performed, as well as an optional pre-processing 812 step. Iris segmentation 813, image normalization 814 and feature extraction 815 are then performed. Finally, the matching decision 807 is performed, based on the database of 806.
Gabor Filters: To extract features from an iris pattern in polar representation, a demodulation process may be used. Local regions of the iris are projected onto quadrature 2-D Gabor wavelets, generating complex-valued coefficients whose real and imaginary parts specify the coordinates of a phasor in the complex plane. The angle of each phasor is quantized to one of the four quadrants, setting two bits of phase information. This process is repeated all across the iris with many wavelet sizes, frequencies, and orientations, so that a vector of 2048 phase bits (256 bytes) is computed for each iris. In one embodiment, this method uses only phase information for recognizing irises, because amplitude information is not very discriminating.
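By way of illustration only, the quadrant-quantization step can be sketched in isolation as follows; the Gabor filtering itself is omitted, and the input is assumed to be an array of complex wavelet projection coefficients.

```python
import numpy as np

def phase_bits(gabor_coeffs: np.ndarray) -> np.ndarray:
    """Quantize complex Gabor coefficients to two phase bits each.

    Each phasor's quadrant in the complex plane is encoded as the sign
    of its real part and the sign of its imaginary part, as described
    above. 1024 coefficients thus yield a 2048-bit iris code.
    """
    bits_re = (gabor_coeffs.real >= 0).astype(np.uint8)
    bits_im = (gabor_coeffs.imag >= 0).astype(np.uint8)
    return np.stack([bits_re, bits_im], axis=-1).ravel()

coeffs = np.random.randn(1024) + 1j * np.random.randn(1024)
print(phase_bits(coeffs).size)  # 2048 phase bits (256 bytes)
```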
Iris Recognition with Off-the-Shelf CNN Features: A Deep Learning Perspective. Training a new deep network requires a large amount of data, and suitable iris databases may be proprietary, small, or nonexistent. Therefore, this method suggests using one of the best-known deep networks, such as AlexNet, VGG, GoogLeNet/Inception, ResNet or DenseNet. These deep networks are already trained on huge databases with a large number of classes, and are designed to recognize visual patterns directly from pixel images with minimal preprocessing. To achieve performance close to human recognition performance, these networks extract unique features that help to later recognize a detected object and classify it into the classes with similar features. This method therefore suggests using the aforementioned deep networks up to the feature extraction stage (that is, prior to classification), and then using the extracted iris features for matching against features in the database.
A multi-biometric iris recognition system based on a deep learning approach. This method is a real-time multimodal biometric system called IrisConvNet. Its architecture is based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier to extract discriminative features from input images of both the right and left irises of a person, without any domain knowledge. The input image represents the localized iris region, which is then classified into one of N classes by fusing the results obtained using a ranking-level fusion method.
Feature matching means finding corresponding features in two similar feature vectors based on a search distance. In one embodiment, one of the feature vectors is named the source and the other the target. The feature matching process analyzes the similarity of source and target using a suitable method. The accuracy of feature matching depends on data similarity, complexity, and quality. In general, the more similar the two datasets, the better the matching results.
Hamming distance, employed by Daugman. The Hamming distance uses XOR to measure how many bits disagree between two templates. To achieve invariance to rotation when calculating the Hamming distance of two templates, one template is shifted left and right bit-wise, and a number of Hamming distance values are calculated from successive shifts. The actual number of shifts required to normalize for rotation is determined by the maximum angle difference between two images of the same eye. One shift is defined as one shift to the left, followed by one shift to the right.
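By way of illustration only, a minimal sketch of this rotation-tolerant matching follows; the shift range is an assumed example, and the acceptance threshold of roughly 0.32 often cited for Daugman-style iris codes is noted for context.

```python
import numpy as np

def fractional_hamming(code_a: np.ndarray, code_b: np.ndarray,
                       max_shift: int = 8) -> float:
    """Rotation-tolerant fractional Hamming distance between iris codes.

    code_b is circularly shifted left and right; the minimum fraction
    of disagreeing bits (XOR) over all shifts is returned.
    """
    best = 1.0
    for shift in range(-max_shift, max_shift + 1):
        disagree = np.count_nonzero(code_a != np.roll(code_b, shift))
        best = min(best, disagree / code_a.size)
    return best

a = np.random.randint(0, 2, 2048)
print(fractional_hamming(a, np.roll(a, 3)))  # 0.0: same eye, rotated
# A commonly cited acceptance threshold is around 0.32.
```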
Deep neural networks for iris recognition systems based on video: stacked sparse auto-encoder (SSAE) and bi-propagation neural network models. For iris feature matching, this method describes two different algorithms: the first is a Stacked Sparse Auto-Encoder (SSAE) deep neural network model, and the second is a bi-propagation deep neural network, described further below.
    • An auto-encoder is an unsupervised feature learning system that aims to enhance the feature representation of high-dimensional input data by identifying relationships within the data. Training the auto-encoder finds optimal parameters by minimizing the discrepancy between the input and its reconstruction. The architecture of the basic sparse auto-encoder (SAE) is the auto-encoder plus a cost function expressing the discrepancy between the input and its reconstruction. In this method, the SSAE contains two layers of sparse auto-encoders (SAE).
    • The original idea of the bi-propagation algorithm is that the hidden layers of a multilayer perceptron (MLP) obtain desired values, as performed with the back-propagation algorithm (a slow algorithm), where a perceptron is an algorithm for supervised learning of binary classifiers. The bi-propagation algorithm treats an N-layer perceptron as N single-layer perceptrons, so the complex learning problem is divided into several simpler problems, independent of each other, and corrections of the synapse weights during learning spread in both directions.
Advantages of iris recognition include:
    • Requires a simple smartphone/tablet/laptop camera.
    • A living password cannot be forgotten or lost.
    • Unique patterns that are hard to fake.
    • Stable with age.
    • High accuracy of recognition.
    • Easy user experience: while a person is looking at the screen, the camera can normally detect the iris.
Disadvantages of iris recognition include:
    • Hard to recognize a person in low light.
    • The iris may be partially blocked by the eyelids, which is difficult for individuals to control due to frequent blinking. Partial occlusion of the iris is normal (no one constantly attempts to keep their eyes wide open), but this occlusion may cause trouble when attempting to perform verification.
Applying Biometrics to the Present Invention
The present invention may allow for various functions and embodiments, such as the following three functions, each of which can be implemented in a variety of ways.
Informally, we can refer to the three overall functions as:
    • Function 1—“Keep Your eyes off my screen”
    • Function 2—“Keep your eyes off my document”
    • Function 3—“Special glasses for viewing the screen”
FIG. 10 depicts Function 1—“Keep your eyes off my screen”, in one embodiment. In step 1001, an authorized user of a device (smartphone, laptop, desktop, tablet, etc.) looks at the screen of the device, as usual. In step 1002, the present invention is continuously operating (using biometric technology) to ensure that only the authorized user is viewing the screen of the device. While iris scanning technology is depicted in FIG. 10, any other biometric technology, as described previously, may be used for this purpose, in other embodiments. In step 1003, when an unauthorized user is detected, the screen of the device is turned off, turned black, or otherwise deactivated. In step 1004, the system continues to scan to verify whether only the authorized user is in view. In step 1005, if only the authorized user is detected, the screen is turned back on.
FIG. 11 depicts Function 2—“Keep your eyes off my document”. In step 1101, the user sets the security level for a particular document. In step 1102, the device operated by the recipient of the document scans (such as by using the camera on the device) biometric characteristics of the user of the device (such as by using iris detection, as shown in FIG. 11, but any other biometric detection may be used). In step 1103, once the device recognizes the recipient using the biometric scan, the document opens for the recipient. In step 1104, a co-worker, stranger, etc. approaches the recipient, and tries to view the document on the device, as well. In step 1105, the device detects the stranger, co-worker, etc., and closes the document (or blacks out the screen) immediately. In step 1106, once the stranger, co-worker, etc. is no longer in the sight of the device, the document will re-open for the authorized recipient to view, edit, etc.
To detect whether an authorized user is attempting to view the screen (FIG. 10) or a particular document (FIG. 11), the invention may use a tracking process. When a new face is detected in the camera's frame, the invention checks and tracks the new face to determine whether it belongs to a registered user of the present invention. If a person appears in the camera's frame whose face is not recognized as a registered user, the screen will shut down, in one embodiment. In other words, because there was a person (or persons) in the camera's frame whose face was not registered, the invention does not recognize a match and may shut down the screen.
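By way of illustration only, a highly simplified sketch of this tracking loop follows. The detect_faces, is_registered and set_screen functions are hypothetical stubs; no real device or library API is implied.

```python
import time

def detect_faces():
    """Hypothetical stub: return the faces in the current camera frame."""
    return []

def is_registered(face) -> bool:
    """Hypothetical stub: match a face against enrolled users."""
    return False

def set_screen(on: bool) -> None:
    """Hypothetical stub: turn the device screen on or off."""
    print("screen on" if on else "screen off")

def screen_guard_loop(poll_seconds: float = 1.0) -> None:
    while True:
        faces = detect_faces()
        # The screen stays on only when exactly one face is present
        # and that face belongs to a registered user.
        if len(faces) == 1 and is_registered(faces[0]):
            set_screen(True)
        else:
            set_screen(False)
        time.sleep(poll_seconds)
```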
FIG. 12 depicts Function 3—“Special glasses for viewing the screen”. In step 1201, the owner of the device puts on special glasses. In step 1202, the screen on the device will look scrambled to people who are not wearing the special glasses, which have been designed specifically for that user. In step 1203, assuming the authorized user is wearing the special glasses designed for her/him, that user will be able to securely view the screen without any problems.
FIG. 13 depicts an example flow of an enrollment process according to the present invention, in one embodiment.
FIG. 13 will be described in further detail, below, according to one embodiment.
To use the application (1300), in one embodiment, a user may log in to an existing account (1301) or register a new one (1302). If the user logs in to the application, the Cloud loads all necessary data for user identification (1303). If the user chooses to create an account, he or she may go through the enrollment phase beginning at step 1304.
Enrollment phase 1305: the application asks the user to define secure profiles:
    • User may register face for face recognition
    • User may register iris for iris recognition
    • User may register finger for fingerprint recognition
    • User may register speech for voice recognition
    • If a device has a biometric password, then in one embodiment the application does not use it, for greater security.
At step 1306, user may choose secure profile mode (automatic 1307 or manual 1308).
    • If selection is automatic:
      • User may define work (1309)/home (1310)/secure location (1311)
      • User can register a secure Wi-Fi network (1312)
      • In one embodiment, if the location cannot be determined (Wi-Fi and GPS are turned off), the application will by default assign a higher-level secure profile. At step 1313, the user can define additional users for access to the secure zone. At step 1314, data is saved for identification in the Cloud database.
Proceeding to secure phase at step 1315:
    • The application asks the user to create/select a folder to define a secure zone
    • User can add additional folders to the secure zone
    • User can add applications (for example eMail) to the secure zone
Different documents typically need different levels of security. For this purpose, secure profiles can be used. They will determine the security level of each document and of all secure environments; for example, in a public place the security level may be the highest, whereas at home it may be the lowest.
Table 1 below provides an example of different secure profile levels which may be used with the present invention. This is just one example—these secure profiles may be implemented in a variety of ways.
TABLE 1

Secure Profile    Secure Methods                  Effect On
Secure Level 1    1. Face Recognition             Open application
Secure Level 2    1. Face Recognition             Open application
                  2. Iris Recognition             Open/edit documents
Secure Level 3    1. Face Recognition             Open application
                  2. Iris Recognition             Open/edit documents
                  3. Continued Identification     Share document
Secure Level 4    1. Face Recognition             Open application
                  2. Iris Recognition             Open/edit documents
                  3. Liveness Check               Share document
                  4. Continued Identification
                  5. Finger Recognition
Secure Level 5    1. Face Recognition             Open application
                  2. Iris Recognition             Open/edit documents
                  3. Liveness Check               Share document
                  4. Continued Identification
                  5. Finger Recognition
                  6. Voice Recognition
FIG. 14 depicts an example of the flow of the secure phase 1315, in one embodiment of the invention.
FIG. 14 will be described in further detail, below.
When a user opens the application (1401), the present invention asks for identification according to the activated secure profile, with the result being identification (1402) or no identification (1403). When a user tries to open a document (step 1404), the invention again asks for identification according to the activated secure profile, resulting in identification (1405) or not (1406). Similarly, when a user tries to share a file (step 1410), identification is requested according to the activated secure profile (identification 1411, in which case the file is shared in step 1413, or no identification 1412). While the user reads/edits a document (step 1407), continuous identification (1408) is performed according to the activated secure profile, maintaining confidentiality (1409).
While the user reads/edits a document, the user can pause the secure process by clicking a button, in order to show the document to a non-registered person. While a user reads/edits a document, the system also checks that nobody else is looking at the screen, according to the activated secure profile. If an additional person is looking at the screen, the present invention performs the following steps (a code sketch of the timing logic follows the list):
    • If the face is in the camera range for less than X seconds, no action is taken.
    • After X seconds, a warning is shown on the screen notifying the owner that someone else is looking at his phone, and an alert is displayed on the screen.
    • If, within a few seconds after the message appears, no response is received from the owner's side, the device screen will shut off.
    • Additionally, when the warning appears, the owner will have the option to pause "confidentiality" and "continued identification" in order to show the document to the companion.
    • The invention also tries to identify the second person by face recognition; this user may have permission to see documents (the owner defined him in the secure zone).
    • If identification succeeds, the alert is removed; otherwise the screen is turned off.
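By way of illustration only, the timing logic above may be sketched as follows, with the grace period ("X seconds") and the owner-response window as assumed example constants:

```python
import time
from typing import Optional

GRACE_SECONDS = 2.0      # "X seconds" before any action is taken (illustrative)
RESPONSE_SECONDS = 10.0  # owner response window before shut-off (illustrative)

def extra_face_action(first_seen: float, warned_at: Optional[float],
                      owner_responded: bool, now: float):
    """Return (action, warned_at) for an unrecognized extra face."""
    if now - first_seen < GRACE_SECONDS:
        return "ignore", warned_at             # face present < X seconds
    if warned_at is None:
        return "show_warning", now             # notify the owner
    if owner_responded:
        return "offer_pause_security", warned_at
    if now - warned_at >= RESPONSE_SECONDS:
        return "screen_off", warned_at         # no response from the owner
    return "wait", warned_at

t0 = time.time()
print(extra_face_action(t0, None, False, t0 + 1.0))  # ('ignore', None)
print(extra_face_action(t0, None, False, t0 + 3.0))  # shows the warning
```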
When the user finishes reading/editing the document and closes the application, all secure processes are stopped, to give the device better performance. In one embodiment, the document may be encoded before sharing, and decoding can be performed using the same secure application.
The above-described steps describe the present invention as performed in “online mode”.
For offline mode: the user can download documents to the device and continue to work with a document when a network connection (Ethernet, Wi-Fi, etc.) is not available.
However, in one embodiment, the most secure documents cannot be downloaded to the device in this situation. Also, in offline mode, and in one embodiment, not all biometric recognition methods will be available.
After a network connection is available once again, the downloaded documents may be merged with the documents in the Cloud, and all secured documents will be erased from the device.
Implementation Details
The present invention provides security of documents during storage, and also security while a document is being read or edited. The security of the documents may be provided through an application/software. As described previously, upon opening the application/software, the user will have to pass a primary person identification.
Primary person identification may be based on all (or a subset) of the biometric identification technologies described previously, such as face recognition, fingerprint recognition, voice recognition and iris recognition. Moreover, primary person identification may depend on the secure profile level: for a higher secure profile level, more methods of identifying the person may be required. In one embodiment, the identification method may be chosen randomly every time the user tries to log in to the application. To achieve better security, in addition to the primary person identification, a liveness check may also depend on the secure profile level.
For a secure environment, a user may be required to register with the application by performing an enrollment phase (as described previously) that includes: enrollment of the iris for iris recognition, enrollment of the face for face recognition, enrollment of speech for voice recognition, and enrollment of a finger for fingerprint recognition. The enrollment phase may be necessary even if the device already has some biometric signature, given the need to obtain the most up-to-date biometric signature.
Additional personality recognition may be performed by device verification, such as by:
    • Checking the model of the device that interacts with the Cloud.
    • Checking the location of the device that interacts with the Cloud.
    • Checking the IP address of the device that interacts with the Cloud.
    • Verifying the Wi-Fi network when entering the Cloud.
As described previously, a secure profile associated with each document may be used to determine the security level for the document, and for all secure environments. The user can change existing secure profiles as needed, and will be able to determine, for each individual document, the level of protection and the methods for recognizing the user. For each document, biometric personality recognition may be used, depending on the security level of the document.
During the viewing and editing of the document, continuous identification may be performed, based, for example, on iris recognition technology, to verify that the authorized person is still working with the device. A continuous identification process may run at all times in the background; for example, iris recognition may be performed periodically (such as every 10 seconds, or another suitable interval). Having iris recognition run only periodically balances effectiveness and performance. For iris recognition to perform properly, the person's eyes must be open; if the person blinks, iris detection may fail to detect the person's iris. In this case, backup identification based on another biometric parameter, such as face recognition, may be performed to verify that an authorized person is still working with the device.
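By way of illustration only, one possible shape for this periodic check with a backup method is sketched below; the recognizer and lock functions are hypothetical stubs, and the 10-second interval is the example interval mentioned above.

```python
import time

def recognize_iris() -> bool:
    """Hypothetical stub: may fail, e.g., while the user blinks."""
    return True

def recognize_face() -> bool:
    """Hypothetical stub: backup biometric check."""
    return True

def lock_session() -> None:
    """Hypothetical stub: secure the document/screen."""
    print("session locked")

def continuous_identification(interval_seconds: float = 10.0) -> None:
    while True:
        if not recognize_iris():       # primary periodic check
            if not recognize_face():   # backup if the iris check fails
                lock_session()         # authorized user no longer verified
                return
        time.sleep(interval_seconds)
```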
Additionally, during the viewing and editing of the document, face detection technology may be used to verify that only an authorized person is looking at the device screen. This process may run at all times (or periodically) in the background, searching for additional faces that enter the camera range and then checking how long each extra face remains in the camera range. If the face is in the camera range for less than, for example, 2 seconds, then no action may be taken, in one embodiment. Otherwise, the present invention may show a warning on the screen that notifies the owner that someone else is looking at his phone or device. If, within, for example, a 10-second period after the message appears, no response is received from the owner, the device screen will automatically turn off. Additionally, when a warning appears, the owner will have the option to pause "confidentiality" and "continuous identification" to show the document to the companion. The user will be able to change the delay and related options.
When a second person is detected in the camera's range of view, the present invention may also try to identify this second person using, for example, face recognition. If it turns out that this second person already has permission to see the document (the user defined him in the enrollment phase), then the alert may automatically be removed. Otherwise, the screen may be turned off if there is no response from the owner's side.
Pause Security is an option to allow the pause of the “confidentiality” and “continuous identification” feature—for example, if an authorized user wants to show the document to a companion. The Pause Security option can be enabled when the system detects an additional face in the camera range or when the owner enables it from the settings. In one embodiment, and for additional security, to enable Pause Security the user may need to perform one random recognition.
To ensure full security on all platforms, all documents and all biometric vectors may be stored in the cloud, in one embodiment. This means that, in this embodiment, the full engine is based in the cloud, to prevent identity theft from the device. For situations where access to the Internet is not available, an offline mode may be turned on. In order to continue working offline, it may be necessary to download a document and biometric vector to a local disk before offline mode is activated.
In offline mode, the present invention is able to determine which documents can be viewed or edited. In one embodiment, an option may be included whereby especially important documents cannot be downloaded to the device. This option may exist because offline mode offers only limited biometric identification possibilities, and because the device may not have all the hardware necessary to support all secure options. In both offline and online modes, the continuous and backup identification functions may be available.
After working with documents offline, and thereafter going back online, the downloaded document may be synchronized with the document in the cloud, and after synchronization is complete, documents and biometric vectors may be erased automatically from the local disk.
In one embodiment, if the security level of a downloaded document is 0, it can be stored on the device for an unlimited time, but after X time the biometric vector will be erased automatically from the local disk. In one embodiment, the user must set X before enabling offline mode.
Additional possible features are described below.
The user may be able to create/select a folder and define it as a secure zone for downloaded documents and biometric vectors.
The user can add applications (for example eMail/word/pdf) to a secure environment.
Different types of information about the document may be stored in the cloud, for example: the date the document was edited, the name of the last editor, etc.
Document sharing may be possible, but only through the application, which means that the receiving party must also be authorized in the application/software and be able to open the document with their own biometric identification.
In addition to the present invention, which provides "confidentiality" for documents, several hardware solutions are available, such as: a screen protector that narrows the viewing angle, a polarized screen protector whereby the user wears glasses, etc.
In addition to the biometric technologies described herein, behavioral profiling may also be used as a passive method of owner recognition.
Overall Architecture of the Present Invention
The present invention may be implemented on a variety of computing and communications technology platform configurations. In one embodiment, a typical architecture may be as depicted below, with respect to FIG. 15. A device, such as a mobile device (laptop 1501, tablet 1503, smartphone 1504, etc.), a desktop computer 1502, or any other type of device may have an associated display device or screen, through which a document or any other information may be displayed to a user. The device(s) may have their own storage (hard disk, solid state drive, memory, etc.), and may also communicate with storage and functionality in a remote location, such as through a cloud computing platform 1510. Documents and other data may be transmitted between the devices and the cloud, and may also be stored locally on the device(s) itself.
B2C Implementation of Present Invention
An app embodying the teachings of the present invention may be developed using appropriate iOS, Android, or other tools. If built for iOS or Android, the resulting app may appear in the iOS and/or Android store as a B2C app and will function accordingly:
    • The user downloads the app from Apple's iOS store or Android App Store.
    • The user will then begin to fill out the required information and use the app according to the process flow described in further detail below.
In the B2C app, in one embodiment, the system may use a cloud service (such as Amazon AWS cloud) to store all of their data.
Operation of the Present Invention in One Embodiment
While the present invention may be implemented in a variety of ways, FIGS. 16.1 to 16.64 depict sample screenshots from one embodiment, for example a smartphone operating in accordance with the present invention.
FIGS. 16.1 through 16.64, labeled FIGS. 16A through 16LLL below, are described in further detail. Again, the ordering and operations described below are simply illustrative; variations are possible while still operating within the scope of the present invention.
FIG. 16A. This depicts a sample home screen where the app of the present invention may be located on a user's phone.
FIG. 16B. This depicts a sample first screen that shows when the app is activated, for the user to begin using the app.
FIG. 16C. This is where the user logs in as a new user or an existing user, such as with their cell phone number or username.
FIG. 16D. The new user logs in with their cell phone number.
FIG. 16E. The new user receives an SMS text message with a verification code.
FIG. 16F. The new user enters the verification code.
FIG. 16G. The new user enters their first name, last name, and a username, as an example.
FIG. 16H. The user sets up their iris recognition, facial recognition, fingerprint pattern, voice recognition, and creates a pattern swipe.
FIG. 16I. This is where a user is notified that they should set up all of their security features if they have not done so.
FIG. 16J. This is where the user returns and uses pattern swipe to enter the app/system.
FIG. 16K. This is where the user returns and uses iris recognition to enter the present invention.
FIG. 16L. This is where the user returns and uses facial recognition to enter the present invention.
FIG. 16M. This is where the user returns and uses voice recognition to enter the present invention.
FIG. 16N. This is where the user returns and uses fingerprint recognition to enter the present invention.
FIG. 16O. This is the location where files are stored once a user creates and names each folder.
FIG. 16P. This is the location where files are located once they are uploaded to the present invention.
FIG. 16Q. This is the homepage where Settings, Notifications, About, Security, Support, Storage, and Login using a desktop are located.
FIG. 16R. This is where the user's shared documents and files are located.
FIG. 16S. This is where recently uploaded documents are located.
FIG. 16T. This is where information about each file is shown, including, for example, the file type, size, owner, security, sharing, location, and who created the file.
FIG. 16U. This is where the user can push the plus button and scan a document, upload a file, or create a folder.
FIG. 16V. This is just some sample text that may be displayed.
FIG. 16W. This is where the user can do a “Quick Share” and choose Level 1, Level 2, Level 3, or Level 4 for security. The user can also choose View only, Notifications, or offline mode for the recipient of the document to use before the document opens.
FIG. 16X. This is where the user can delete, rename, move, or star (tag as important) the file.
FIG. 16Y. This is where the user chooses whether the document can be downloaded.
FIG. 16Z. This refers to the offline mode, where the user can choose a time limit for offline access.
FIG. 16AA. This is where the user decides who to share the documents with.
FIG. 16BB. This shows the details of the document such as size, owner, file type, who created the document, location, security, and sharing of the document.
FIG. 16CC. This is where the user selects the level of security for each document sent, which the receiver must satisfy before the document will open. This is an example of how security levels may be used (a policy-table sketch follows the list below).
For example:
Level 1: One biometric, push notification
Level 2: Two biometrics, and a pattern swipe
Level 3: Three biometrics
Level 4: Four biometrics
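The level-to-requirement mapping above lends itself to a simple policy table; the following Python sketch is one hypothetical encoding (the field names and checking function are assumptions for illustration).

    SECURITY_LEVELS = {
        1: {"biometrics": 1, "push_notification": True,  "pattern_swipe": False},
        2: {"biometrics": 2, "push_notification": False, "pattern_swipe": True},
        3: {"biometrics": 3, "push_notification": False, "pattern_swipe": False},
        4: {"biometrics": 4, "push_notification": False, "pattern_swipe": False},
    }

    def may_open(level, biometrics_passed, swipe_ok, push_acknowledged):
        # The receiver must satisfy every requirement of the sender-chosen
        # level before the document opens.
        rule = SECURITY_LEVELS[level]
        return (biometrics_passed >= rule["biometrics"]
                and (swipe_ok or not rule["pattern_swipe"])
                and (push_acknowledged or not rule["push_notification"]))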
FIG. 16DD. This is where the user may officially select the biometrics that will be used by the receiver to open the document.
FIG. 16EE. Sharing a file (an enforcement sketch follows this list):
    • For example, the user selects if the receiver can:
      • 1. View: The receiver can only view the document and is not allowed to download the document
      • 2. Download: The receiver can download the document
      • 3. Share: The receiver can share the document with anyone he or she chooses
    • The user can select the following:
    • Location: The user can select a location where the receiver has access to the document. For example, if the user selects the address of the receiver's employer, then the receiver can only open the document at that address. Alternatively, the user can allow the receiver to open the document from any location.
    • WiFi IP ADDRESS: The user can choose an IP address that the receiver must use for the document to open. If the receiver is not using that IP address, then the document will not open. The user can also decide not to specify an IP address for the receiver to use and the receiver can open the document on any WiFi IP address.
    • Time Limit: The user can decide how long the receiver has access to the document. For example: unlimited time, 24 hours, 48 hours, 72 hours, etc.
    • Offline access: The user can choose whether the receiver can access documents offline when no Internet connection is available.
    • Notifications: The user can receive notifications for when files are shared, sent, or added.
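A minimal sketch of how the per-share restrictions listed above could be enforced at open time follows; the class name, its fields, and the treatment of "any location"/"any WiFi IP" as None are hypothetical assumptions.

    import time

    class SharePolicy:
        """Hypothetical per-share restrictions (illustrative only)."""

        def __init__(self, can_view=True, can_download=False, can_share=False,
                     allowed_location=None, allowed_ip=None, ttl_seconds=None,
                     offline_ok=False):
            self.can_view = can_view
            self.can_download = can_download
            self.can_share = can_share
            self.allowed_location = allowed_location  # None means any location
            self.allowed_ip = allowed_ip              # None means any WiFi IP
            self.expires_at = (time.time() + ttl_seconds) if ttl_seconds else None
            self.offline_ok = offline_ok

        def permits_open(self, location, ip, online):
            if self.expires_at and time.time() > self.expires_at:
                return False  # time limit exceeded
            if self.allowed_location and location != self.allowed_location:
                return False  # receiver is not at the required location
            if self.allowed_ip and ip != self.allowed_ip:
                return False  # receiver is not on the required WiFi IP address
            if not online and not self.offline_ok:
                return False  # offline access was not granted
            return self.can_view

    # Example: view-only, 24-hour share pinned to one WiFi IP address.
    policy = SharePolicy(can_view=True, allowed_ip="203.0.113.7", ttl_seconds=24 * 3600)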
FIG. 16FF. Select a biometric for the receiver to use before the file opens to the receiver.
FIG. 16GG. This is where the user clicks share or cancel on the file to be sent.
FIG. 16HH. This is where the user selects the time limit that a recipient can have access to the file that the user sends to them.
FIG. 16II. This is where the user selects the WiFi IP addresses for the recipient to use in order to access the document that the user sends to the recipient.
FIG. 16JJ. This is where the user selects the location of where the recipient can access the document that was sent.
FIG. 16KK. This is where the user selects if the receiver is allowed to View, Download or Share the document.
FIG. 16LL. This is where the user can rename a file.
FIG. 16MM. This is where the user can decide to delete a file.
FIG. 16NN. This is where the user can move files into any folder that he or she decides.
FIG. 16OO. This is where the user may add their name, picture, and location.
FIG. 16PP. Notifications: This is where the user may specify to be notified when someone adds a file, or shares a file.
FIG. 16QQ. This is the About Us section that may tell the users about the app/system.
FIG. 16RR. This is where the user may scan a QR Code, from a website or any other location where it is displayed, to use the present invention as a Web App.
FIG. 16SS. This is where the user may add their picture, name, location, phone number and username.
FIG. 16TT. This is where the present invention may show the security that the user set up to access the present invention, as well as safe WiFi IP addresses and safe locations.
FIG. 16UU. This is the notifications section when someone shares a file, or adds a file.
FIG. 16VV. This is the screen that shows when the authorized user has left the camera's view, and is away from the device.
FIG. 16WW. When an unauthorized viewer looks at the user's screen, the facial recognition detects the unauthorized viewer, and the screen then shows a warning sign indicating that multiple viewers are looking at the user's screen.
FIG. 16XX. This screen is for the introduction of setting up fingerprint recognition.
FIG. 16YY. This is the screen where the authorized user sets up his or her fingerprint recognition.
FIG. 16ZZ. This is the screen where it shows that the authorized user has successfully set up fingerprint recognition.
FIG. 16AAA. This is the screen for the introduction on setting up iris recognition.
FIG. 16BBB. This is where the user sets up the iris recognition.
FIG. 16CCC. This is where it shows that iris recognition is successfully set up.
FIG. 16DDD. This is the introduction to setting up facial recognition.
FIG. 16EEE. This is where the user sets up the facial recognition.
FIG. 16FFF. This is where it shows that the user has successfully set up facial recognition.
FIG. 16GGG. This is the introduction of setting up swipe pattern.
FIG. 16HHH. This is where the user has set up the swipe pattern.
FIG. 16III. This is where the swipe pattern is confirmed.
FIG. 16JJJ. This is the introduction for setting up voice recognition.
FIG. 16KKK. This is where the user adds their voice recognition.
FIG. 16LLL. This is where the user has successfully set up the voice recognition.
B2B Implementation of the Present Invention, and Administration of Same
The app of the present invention may have an admin panel, for example for business clients. The admin panel may include a variety of features that help security professionals who manage the company's documents maintain oversight of those documents and files.
Below is a list of features that may in various embodiments be included in the admin panel:
    • The app may connect to a company's cloud provider(s) such as Azure, Box.com, AWS, etc.
    • The present invention may act only as an interface for the enterprise and not hold any data, since it connects directly to the enterprise's list of cloud providers.
    • The organization/business may have an Admin panel to use to control and monitor various administrative functions.
    • The Admin may be able to add or remove employees' access to the present invention's app, and may also perform other administrative functions, as described in further detail below.
The Admin may optionally receive notifications, as described below (a rule-table sketch follows the list):
    • The Admin may receive a notification when the “Warning Multiple Viewers” sign is dismissed (security was paused) to show another person the document that was being viewed.
    • The Admin may receive a notification when the employee takes a screenshot of the document with their device.
    • The Admin may receive a notification after several failed attempts to log in with face, voice, fingerprint, or iris.
    • The Admin may receive a notification when a document is shared too many times, so the Admin can investigate why the document is so "hot".
    • The Admin may receive a notification when too many gigabytes of storage are being used by an employee.
    • The Admin may receive a notification of unusual behavior, e.g., a sudden spike in an employee's usage of the present invention app; the employee may be searching for specific documents because he or she knows that they are leaving for another job.
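One hypothetical way to express the notification triggers above is as a rule table evaluated against employee events; the thresholds below are illustrative assumptions, not values taken from the disclosure.

    def admin_alerts(event):
        # event is a dict describing one employee action or usage snapshot.
        rules = [
            (event.get("type") == "security_paused",  "Warning Multiple Viewers dismissed"),
            (event.get("type") == "screenshot",       "Employee screenshotted a document"),
            (event.get("failed_logins", 0) >= 3,      "Several failed biometric logins"),
            (event.get("share_count", 0) > 20,        "Document is unusually 'hot'"),
            (event.get("storage_gb", 0) > 50,         "Excessive storage use by employee"),
            (event.get("usage_spike_ratio", 1) > 5,   "Sudden spike in app usage"),
        ]
        return [message for triggered, message in rules if triggered]

    # Example: three failed logins produce one alert for the Admin.
    alerts = admin_alerts({"type": "login", "failed_logins": 3})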
Additionally, the below describes some security features that may be implemented (a sketch of the alias-link feature follows the list):
    • Redirect files with an alias link: this hides the actual storage location of the file.
    • Onboarding process: users take a picture of their driver's license or passport, which is matched against the facial recognition they set up for the present invention.
    • Sign documents inside of the app. The signatures are verified against the driver's license or passport.
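The alias-link redirect in the list above can be sketched as a server-side token table; the URL prefix, token length, and function names are hypothetical.

    import secrets

    ALIASES = {}  # alias token -> real storage location (kept server-side only)

    def make_alias(real_location):
        # The recipient only ever sees the opaque token, never the real path.
        token = secrets.token_urlsafe(16)
        ALIASES[token] = real_location
        return "https://files.example.com/d/" + token

    def resolve(token):
        # Server-side lookup; the real location is never exposed to the client.
        return ALIASES.get(token)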
Further Implementation Details and Design Alternatives
It will be apparent to persons skilled in the relevant fields that various modules and features of the present disclosure, as described herein, can be implemented in hardware using analog and/or digital circuits, in software, through the execution of computer instructions by one or more general purpose or special-purpose processors, or as a combination of hardware and software.
Embodiments of the present disclosure can be implemented in hardware, in software, or as a combination of software and hardware. Consequently, embodiments of the disclosure may be implemented in the environment of a computer system or other processing system.
Suitable development platforms may be used to implement the various features of the present invention, whether implemented on a server, on the client side (e.g., as an app on a mobile device), or the like. Those skilled in the art will be familiar with such development platforms.
In another embodiment, features of the present invention may be implemented in hardware using, for example, hardware components such as application-specific integrated circuits (ASICs) and gate arrays, or the like. Implementation of a hardware state machine so as to perform the functions described herein will also be apparent to persons skilled in the relevant art(s).
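The claims below recite a weighted, multi-biometric confidence calculation that escalates to additional biometric features until a threshold is met or the features are exhausted. The following Python sketch shows one way such a calculation could be performed; the weights, threshold, and capture callback are hypothetical assumptions, not values from the disclosure.

    def identify(person_features, profile_weights, threshold, capture_next):
        # person_features: per-modality match scores in [0, 1], e.g. {"face": 0.93}
        # profile_weights: per-modality weights from the user profile,
        #                  e.g. {"face": 0.5, "voice": 0.2, "iris": 0.2, "fingerprint": 0.1}
        remaining = [m for m in profile_weights if m not in person_features]
        while True:
            confidence = sum(profile_weights[m] * score
                             for m, score in person_features.items())
            if confidence >= threshold:
                return True   # person identified as the authorized user
            if not remaining:
                return False  # no more biometric features to request
            modality = remaining.pop(0)          # request an additional biometric
            person_features[modality] = capture_next(modality)

For example, with a face weight of 0.5 and a face match score of 0.93, the initial confidence is 0.465; against a threshold of 0.6, the system would request an additional biometric, such as a voice sample, and re-calculate.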

Claims (24)

The invention claimed is:
1. A system for managing access to a display of a user device comprising:
a user device with a display;
a detection device for capturing a field of view in proximity to the display; and
a processor for:
creating a user profile associated with a user, wherein the user profile includes a plurality of biometric features associated with the user and a weight associated with each of the plurality of biometric features;
creating a document profile associated with a document, wherein the document profile includes an authorized user, functions that the authorized user can perform on the document, and a minimum confidence level required for the user to be identified as the authorized user;
upon detecting in the field of view a person,
determining a first biometric feature of the person;
calculating a confidence level using:
the first biometric feature of the person,
the plurality of biometric features from the user profile, and
the weight associated with each of the plurality of biometric features of the user profile;
if the calculated confidence level exceeds a predetermined threshold, identifying the person as the authorized user of the document;
if the calculated confidence level falls below the predetermined threshold, requesting the person to provide an additional biometric feature and re-calculating the confidence level with the additional biometric feature and the additional biometric feature's weight until either the calculated confidence level exceeds the predetermined threshold or there are no more additional biometric features to be determined, in which case the user is not identified as the authorized user;
if the person is identified as the authorized user, activating the display to allow the authorized user to perform the functions that the authorized user can perform on the document; and
if the person is not the authorized user, deactivating the display to prevent the person from performing the functions that the authorized user can perform on the document.
2. The system of claim 1, wherein the detection device is a camera.
3. The system of claim 1, wherein the processor activates the display by allowing the opened document to be visible on the display.
4. The system of claim 1, wherein the processor deactivates the display by preventing the opened document from being visible on the display.
5. The system of claim 1, further comprising:
asking the person to say a random sequence;
comparing the person's speech to a pattern in a database to determine if the person is the authorized user;
if the person is the authorized user, activating the display to permit the authorized user to perform the functions that an authorized user can perform on the document, commensurate with the profile; and
if the person is not the authorized user, deactivating the display to prevent the person from performing the functions that an authorized user can perform on the document.
6. A method for managing access to a display of a user device, comprising the steps of:
creating a user profile associated with a user, wherein the user profile includes a plurality of biometric features associated with the user and a weight associated with each of the plurality of biometric features;
creating a document profile associated with a document, wherein the document profile includes an authorized user, functions that the authorized user can perform on the document, and a minimum confidence level required for the user to be identified as the authorized user;
capturing a field of view in proximity to the display;
upon detecting in the field of view a person,
determining a first biometric feature of the person;
calculating a confidence level using:
the first biometric feature of the person,
the plurality of biometric features from the user profile, and
the weight associated with each of the plurality of biometric features of the user profile;
if the calculated confidence level exceeds a predetermined threshold, identifying the person as the authorized user of the document;
if the calculated confidence level falls below the predetermined threshold, requesting the person to provide an additional biometric feature and re-calculating the confidence level with the additional biometric feature and that feature's weight until either the calculated confidence level exceeds the predetermined threshold or there are no more additional biometric features to be determined, in which case the user is not identified as the authorized user;
if the person is identified as the authorized user, activating the display to allow the authorized user to perform the functions that the authorized user can perform on the document; and
if the person is not the authorized user, deactivating the display to prevent the person from performing the functions that an authorized user can perform on the document.
7. The method of claim 6, wherein the detection device is a camera.
8. The method of claim 6, wherein the display is activated by allowing the opened document to be visible on the display.
9. The method of claim 6, wherein the display is deactivated by preventing the opened document from being visible on the display.
10. A system for limiting access to a document on a user device to an authorized user, the system comprising:
a storage device for storing the document;
a processor for:
creating a user profile associated with a user, wherein the user profile includes a plurality of biometric features associated with the user and a weight associated with each of the plurality of biometric features;
generating a profile associated with the document, the document profile associated with at least one authorized user, functions that the authorized user can perform on the document, and a minimum confidence level required to interact with the document in a selected manner;
capturing a field of view in proximity to the user device;
upon detecting in the field of view at least one biometric feature of a person,
determining a first biometric feature of the person;
calculating a confidence level using:
the first biometric feature of the person,
the plurality of biometric features from the user profile, and
the weight associated with each of the plurality of biometric features of the user profile;
if the calculated confidence level exceeds a predetermined threshold, identifying the person as the authorized user of the document;
if the calculated confidence level falls below the predetermined threshold, requesting the person to provide an additional biometric feature and re-calculating the confidence level with the additional biometric feature and the additional biometric feature's weight until either the calculated confidence level exceeds the predetermined threshold or there are no more additional biometric features to be determined, in which case the user is not identified as the authorized user;
and
allowing the authorized user to perform the functions that the authorized user can perform on the document.
11. The system of claim 10, wherein the processor further performs the steps of:
continuing to capture the field of view in proximity to the user device; and
upon detecting in the field of view a second user who is not considered the authorized user, disallowing the authorized user to interact with the document in the selected manner.
12. The system of claim 11, wherein the authorized user is disallowed by the processor to interact with the document in the selected manner, by disabling the user device.
13. The system of claim 10, wherein the authorized user is allowed by the processor to interact with the document in the selected manner, by enabling the user device.
14. The system of claim 10, wherein the authorized user is allowed by the processor to interact with the document via the user device in the selected manner, by opening, editing or sharing the document.
15. The system of claim 10, wherein the storage device is associated with the user device.
16. The system of claim 10, wherein the storage device and the user device are remote from each other, but connected via a communication channel.
17. A method for managing access to a document on a user device to an authorized user, the document stored on a storage device, comprising the steps of:
creating a user profile associated with a user, wherein the user profile includes a plurality of biometric features associated with the user and a weight associated with each of the plurality of biometric features;
generating a profile associated with the document, the document profile associated with at least one authorized user, functions that the authorized user can perform on the document, and a minimum confidence level required to interact with the document in a selected manner;
capturing a field of view in proximity to the user device;
upon detecting in the field of view a person
determining a first biometric feature of the person;
calculating a confidence level using
the first biometric feature of the person,
the plurality of biometric features from the user profile, and
the weight associated with each of the plurality of biometric features of the user profile;
if the calculated confidence level exceeds a predetermined threshold, identifying the person as the authorized user of the document,
if the calculated confidence level falls below the predetermined threshold, requesting the person to provide an additional biometric feature and re-calculating the confidence level with the additional biometric feature and the additional feature's weight until either the calculated confidence level exceeds the predetermined threshold or there are no more additional biometric features to be determined, in which case the user is not identified as the authorized user; and
allowing the authorized user to perform the function that an authorized user can perform on the document.
18. The method of claim 17, further comprising the steps of:
continuing to capture the field of view in proximity to the user device; and
upon detecting in the field of view a second user who is not considered the authorized user, disallowing the authorized user to interact with the document in the selected manner.
19. The method of claim 18, wherein the authorized user is disallowed to interact with the document in the selected manner, by disabling the user device.
20. The method of claim 17, wherein the authorized user is allowed to interact with the document in the selected manner, by enabling the user device.
21. The method of claim 17, wherein the authorized user is allowed to interact with the document via the user device in the selected manner, by opening, editing or sharing the document.
22. The method of claim 17, wherein the storage device is associated with the user device.
23. The method of claim 17, wherein the storage device and the user device are remote from each other, but are connected via a communication channel.
24. A non-transitory computer-readable medium with computer executable instructions embodied thereon for managing access to a display of a user device, the computer-executable instructions causing a computer to perform the process of:
creating a user profile associated with a user, wherein the user profile includes a plurality of biometric features associated with the user and a weight associated with each of the plurality of biometric features;
creating a document profile associated with a document on a user device, the document profile includes an authorized user, functions that the authorized user can perform on the document, and a minimum confidence level required for the user to be identified as the authorized user;
capturing a field of view in proximity to the display;
upon detecting in the field of view a person, checking the profile to determine if the person is the authorized user of the associated opened document by:
determining a first biometric feature of the person;
calculating a confidence level using:
the first biometric feature of the person,
the plurality of biometric features from the user profile, and
the weight associated with each of the plurality of biometric features of each user profile;
if the calculated confidence level exceeds a predetermined threshold, identifying the person as the authorized user of the document;
if the calculated confidence level falls below the predetermined threshold, requesting the person to provide an additional biometric feature and re-calculating the confidence level with the additional biometric feature and that feature's weight until either the calculated confidence level exceeds the predetermined threshold or there are no more additional biometric features to be determined, in which case the user is not identified as the authorized user;
if the person is identified as the authorized user, activating the display to allow the authorized user to perform the functions that the authorized user can perform on the document; and
if the person is not the authorized user, deactivating the display to prevent the person from performing the functions that the authorized user can perform on the document.
US16/542,451 2018-08-21 2019-08-16 System and method for securely viewing, editing and sharing documents and other information Active 2040-03-04 US11341222B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/542,451 US11341222B1 (en) 2018-08-21 2019-08-16 System and method for securely viewing, editing and sharing documents and other information

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862720543P 2018-08-21 2018-08-21
US16/542,451 US11341222B1 (en) 2018-08-21 2019-08-16 System and method for securely viewing, editing and sharing documents and other information

Publications (1)

Publication Number Publication Date
US11341222B1 true US11341222B1 (en) 2022-05-24

Family

ID=81656350

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/542,451 Active 2040-03-04 US11341222B1 (en) 2018-08-21 2019-08-16 System and method for securely viewing, editing and sharing documents and other information

Country Status (1)

Country Link
US (1) US11341222B1 (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020056046A1 (en) * 1998-03-09 2002-05-09 Dean A. Klein Method for automatically initiating a computer security and/or screen saver mode
US20040064728A1 (en) * 2002-09-30 2004-04-01 Scheurich Christoph E. Personal authentication method and apparatus sensing user vicinity
US20110225202A1 (en) * 2007-08-21 2011-09-15 International Business Machines Corporation Multi-dimensional access control list
US20100205667A1 (en) * 2009-02-06 2010-08-12 Oculis Labs Video-Based Privacy Supporting System
US20110307960A1 (en) * 2010-06-11 2011-12-15 Brian John Cepuran Systems, methods, and apparatus for securing user documents
US20160253559A1 (en) * 2015-02-27 2016-09-01 Fotonation Limited Optical system for an image acquisition device
US20170243020A1 (en) * 2016-02-19 2017-08-24 International Business Machines Corporation Proximity based content security
US20190034395A1 (en) * 2017-07-28 2019-01-31 Cisco Technology, Inc. Combining modalities for collaborating while editing and annotating files
US20190340373A1 (en) * 2018-05-03 2019-11-07 Citrix Systems, Inc. Control viewing access to documents in collaborative scenarios using facial recognition from webcams

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220114808A1 (en) * 2019-01-24 2022-04-14 Identy Inc. Method for verifying the identity of a user by identifying an object within an image that has a biometric characteristic of the user and mobile device for executing the method
US20230394870A1 (en) * 2019-01-24 2023-12-07 Identy Inc. Method for verifying the identity of a user by identifying an object within an image that has a biometric characteristic of the user and mobile device for executing the method
US11875597B2 (en) * 2019-01-24 2024-01-16 Identy Inc. Method for verifying the identity of a user by identifying an object within an image that has a biometric characteristic of the user and mobile device for executing the method
US20210089668A1 (en) * 2019-09-20 2021-03-25 Fuji Xerox Co., Ltd. Information processing apparatus and non-transitory computer readable medium
US11611590B1 (en) * 2019-12-09 2023-03-21 Proofpoint, Inc. System and methods for reducing the cybersecurity risk of an organization by verifying compliance status of vendors, products and services
US20220255923A1 (en) * 2021-02-05 2022-08-11 Cisco Technology, Inc. Collaboration application integration for user-identity verification
US11908478B2 (en) 2021-08-04 2024-02-20 Q (Cue) Ltd. Determining speech from facial skin movements using a housing supported by ear or associated with an earphone
US11915705B2 (en) 2021-08-04 2024-02-27 Q (Cue) Ltd. Facial movements wake up wearable
US11922946B2 (en) 2021-08-04 2024-03-05 Q (Cue) Ltd. Speech transcription from facial skin movements
US12105785B2 (en) 2021-08-04 2024-10-01 Q (Cue) Ltd. Interpreting words prior to vocalization
US20240071364A1 (en) * 2022-07-20 2024-02-29 Q (Cue) Ltd. Facilitating silent conversation
US20240073219A1 (en) * 2022-07-20 2024-02-29 Q (Cue) Ltd. Using pattern analysis to provide continuous authentication
US12131739B2 (en) * 2023-11-07 2024-10-29 Q (Cue) Ltd. Using pattern analysis to provide continuous authentication
US12130901B2 (en) 2023-11-16 2024-10-29 Q (Cue) Ltd. Personal presentation of prevocalization to improve articulation

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE