GB2519571A - Audiovisual associative authentication method and related system - Google Patents
- Publication number: GB2519571A (application GB1318876.8A / GB201318876A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- user
- service
- cues
- terminal
- authentication
- Legal status: Withdrawn
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
- G10L17/24—Interactive procedures; Man-machine interfaces the user being prompted to utter a password or a predefined phrase
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/36—User authentication by graphic or iconic representation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L63/0861—Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/21—Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/2103—Challenge-response
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/21—Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/2111—Location-sensitive, e.g. geographical location, GPS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L63/0853—Network architectures or network communication protocols for network security for authentication of entities using an additional device, e.g. smartcard, SIM or a different communication terminal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/10—Network architectures or network communication protocols for network security for controlling access to devices or network resources
- H04L63/107—Network architectures or network communication protocols for network security for controlling access to devices or network resources wherein the security policies are location-dependent, e.g. entities privileges depend on current location or allowing specific operations only from locally connected terminals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W12/00—Security arrangements; Authentication; Protecting privacy or anonymity
- H04W12/60—Context-dependent security
- H04W12/69—Identity-dependent
- H04W12/77—Graphical identity
Abstract
In a challenge-response authentication system, stored personal voiceprints (112, fig. 1b) are linked with visual (202, fig. 2a) or audio cues and represented 146 to the user 102, who must provide the required utterance for capture and processing (e.g. via a Java application on a mobile phone 102b) in order to match 150 a captured sound file (204). The authentication status of the user may then be elevated 152. The system may also use QR codes 140 to transfer dynamic identification data, and location information may be used to block certain addresses or countries.
Description
AUDIOVISUAL ASSOCIATIVE AUTHENTICATION METHOD
AND RELATED SYSTEM
FIELD OF THE INVENTION
Generally the invention pertains to computers and related communications infrastructures. In particular, though not exclusively, the invention concerns authentication to an electronic service.
BACKGROUND
Access control in conjunction with network services may imply user identification, which can be generally based on a variety of different approaches.
For example, three categories may be considered, including anonymous, standard and strong identification. Regarding the anonymous case, the service users do not have to be and are not identified. Standard, or 'normal', identification may refer to what the requestor for access knows, such as a password, or bears, such as a physical security token. Such a token may include a password-generating device (e.g. SecurID™), a list of one-time passwords, a smart card and a reader, or a one-time password transmitted to a mobile terminal. Further, strong identification may be based on a biometric property, particularly a biometrically measurable property, of a user, such as a fingerprint or retina, or a security token the transfer of which between persons is difficult, such as a mobile terminal including a PKI (Public Key Infrastructure) certificate requiring entering a PIN (Personal Identification Number) code upon each instance of use.
On the other hand, network service-related authentication, i.e. reliable identification, may also be implemented on several levels, e.g. on four levels, potentially including unnecessary, weak, strongish, and strong authentication, wherein strongish authentication, being stronger than weak, thus resides between the weak and strong options. If the user may remain anonymous, authentication is unnecessary. Weak authentication may refer to the use of a single standard-category identification means, such as a user ID/password pair. Instead, strongish authentication may apply at least two standard identification measures utilizing different techniques. With strong authentication, at least one of the identification measures should be strong.
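By way of editorial illustration only (this sketch is not part of the original disclosure), the four-level scheme above can be condensed into a small decision function. The type names and the exact scoring rules below are assumptions; the text does not prescribe how factor combinations map to levels:

```typescript
// Editorial sketch of the four authentication levels described above.
type FactorCategory = "standard" | "strong";

interface AuthFactor {
  category: FactorCategory;
  technique: string; // e.g. "password", "security token", "voiceprint"
}

type AuthLevel = "unnecessary" | "weak" | "strongish" | "strong";

function authenticationLevel(factors: AuthFactor[]): AuthLevel {
  if (factors.length === 0) return "unnecessary";          // anonymous use
  const techniques = new Set(factors.map(f => f.technique));
  const hasStrong = factors.some(f => f.category === "strong");
  if (hasStrong && factors.length >= 2) return "strong";   // several measures, at least one strong
  if (techniques.size >= 2) return "strongish";            // two differing standard techniques
  return "weak";                                           // single standard measure
}

console.log(authenticationLevel([{ category: "standard", technique: "password" }])); // "weak"
console.log(authenticationLevel([
  { category: "standard", technique: "password" },
  { category: "strong", technique: "voiceprint" },
])); // "strong"
```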
Notwithstanding the various advancements that have taken place during recent years in the context of user and service identification, authentication, and related secure data transfer, some defects still remain and are next briefly and non-exhaustively reviewed together with useful general background information.
Roughly, access control methods to network services include push and pull methods. In pull methods, a user may first identify oneself anonymously to a network service providing a login screen in return. The user may then type in the user ID and a corresponding password, whereupon he/she may directly access the service or be funneled into the subsequent authentication phase. In push methods, a network server may first transmit information to the e-mail address of the user in order to authorize accessing the service. Preferably only the user knows the password of the e-mail account.
The users are often reluctant to manually manage a plurality of user IDs and corresponding passwords. As a result, they may utilize the very same user ID and/or password in multiple services and/or use rather obvious and thus easy-to-crack words, numbers or expressions as passwords. Even if the access control management systems require using a strong, i.e. hard-to-remember, password, the risk that the user writes the password down increases considerably and the authentication level ultimately turns weak.
Yet, the utilization of a password is typically enabled by an access control management entity that may also store the password locally. If the security of the data repository is later jeopardized, third parties may acquire all the passwords stored therein. Also, if the user forgets the password or it has to be changed for some other reason, actions have to be taken by the user and optionally the service provider. The user has to memorize the new password.
Further, the adoption of a personal, potentially network service-specific token such as a smartcard, e.g. SecurID, and a related reader device may require intensive training. The increase in the use of smart cards correspondingly raises the risk of thefts and the provision of replacement cards. In case the personal tokens apply a common (distributed) secure algorithm, the theft of such an algorithm would cause tremendous security issues and trigger massive update operations regarding the associated elements, such as tokens, in order to recover at least part of the original security.
E.g. in the context of cloud services such as cloud virtual-desktop services that may be regularly, e.g. daily, utilized by a user, the nowadays available access control procedures, especially identification and authentication solutions applied upon logging in to a service, are typically either inadequate in terms of the achieved data security or awkward from the standpoint of usability with reference to the aforesaid lengthy and strong, i.e. complex and thus hard-to-remember, passwords.
SUMMARY OF THE INVENTION
The objective is to at least alleviate one or more of the problems described hereinabove regarding the usability and security issues, such as authentication, associated with contemporary remote computer systems and related electronic services such as online services.
The objective is achieved by the system and method in accordance with the present invention. The suggested solution cleverly harnesses, among other factors, the associative memory of a user for electronic authentication as described in more detail hereinafter.
In an aspect of the present invention, a system for authenticating a user of an electronic service comprises at least one server apparatus, preferably provided with a processing entity and a memory entity for processing and storing data, respectively, and a data transfer entity for receiving and sending data, the system being configured to: store, for a number of users, a plurality of personal voiceprints, each of which is linked with a dedicated visual, audiovisual or audio cue, for challenge-response authentication of the users; pick, upon receipt of an authentication request associated with an existing user of said number of users, a number of cues for which voiceprints of the existing user are stored, and provide the cues for representation to the user of the service as a challenge; receive sound data indicative of the voice response uttered by the user of the service to the represented cues; determine, on the basis of the sound data, the represented cues and the linked voiceprints, whether the response has been uttered by the existing user of said number of users; and, provided that this seems to be the case, elevate the authentication status of the user of the service as the existing user, preferably regarding at least the current communication session.
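As an editorial illustration (not part of the original disclosure), the aspect above can be sketched in dependency-free TypeScript. The feature representation (plain number vectors, assumed to be extracted elsewhere), the cosine score and the 0.8 acceptance threshold are assumptions, not the patent's actual matching algorithm:

```typescript
// Editorial sketch: pick cues that have stored voiceprints, serve them as
// a challenge, and verify an uttered response cue by cue.
interface Voiceprint { cueId: string; features: number[]; }

const voiceprintsByUser = new Map<string, Voiceprint[]>(); // persisted per user

function pickChallenge(userId: string, count = 3): string[] {
  const prints = voiceprintsByUser.get(userId) ?? [];
  // naive shuffle is sufficient for a sketch; take `count` cue ids
  return prints.map(p => p.cueId).sort(() => Math.random() - 0.5).slice(0, count);
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// `responses` holds one feature vector per represented cue, in cue order.
function verifyResponse(userId: string, cueIds: string[], responses: number[][]): boolean {
  const prints = voiceprintsByUser.get(userId) ?? [];
  return cueIds.every((cueId, i) => {
    const print = prints.find(p => p.cueId === cueId);
    return !!print && cosine(responses[i], print.features) > 0.8; // assumed threshold
  });
}
```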
Preferably the sound data is received from a mobile (terminal) device. The mobile device advantageously incorporates a microphone for capturing voice and encoding it into digital format. Preferably, the system maintains, or has access to, information linking service/application users, or user ids, with mobile devices or mobile identities, e.g. an IMEI code or IMSI code (or other smart card data), respectively. Optionally, the mobile phone number could be utilized for the purpose.
Optionally, the cues are indicated to the user via a first terminal device such as a laptop or desktop computer. Service data in general and/or the cues may be provided as browser data such as web page data. Preferably such a first terminal device includes, or is at least connected to, a display, a projector and/or a loudspeaker with the necessary digital-to-analogue conversion means for the purpose.
Advantageously, the sound data is then obtained via a second terminal device, preferably via the aforementioned mobile device like a cellular phone, typically a smartphone, or a communications-enabled PDA/tablet, configured to capture the sound signal incorporating the user's voice (uttering the response to the cues) and convert it into digital sound data forwarded towards the system.
In some embodiments, the mobile device may be provided with a message, such as an SMS message, triggered by the system in order to verify that the user requiring voice-based authentication has the mobile device with him/her. For example, the user may have logged in to an electronic service using a certain user id that is associated with the mobile device. Such association may be dynamically controlled in the service settings by the user, for instance. In response to the message, the user has to trigger sending a reply, optionally via the same mobile device or via the first terminal, optionally provided with a secret such as a password, or other acknowledgement linkable by the system with the user (id).
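A minimal editorial sketch of such a possession check, assuming an in-memory nonce store and a stub standing in for a real SMS gateway (both assumptions, not part of the original disclosure):

```typescript
// Editorial sketch: send a one-time code to the linked mobile device and
// continue only after a matching acknowledgement is received.
import { randomUUID } from "node:crypto";

const pendingAcks = new Map<string, string>(); // userId -> expected ack code

function sendSms(phoneNumber: string, text: string): void {
  console.log(`SMS to ${phoneNumber}: ${text}`); // stand-in for a real gateway call
}

function startPossessionCheck(userId: string, phoneNumber: string): void {
  const code = randomUUID().slice(0, 8);
  pendingAcks.set(userId, code);
  sendSms(phoneNumber, `Reply with code ${code} to continue authentication`);
}

function acknowledge(userId: string, code: string): boolean {
  const ok = pendingAcks.get(userId) === code;
  if (ok) pendingAcks.delete(userId); // one-time use
  return ok;
}
```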
In some embodiments, the cues may be represented visually and/or audibly utilizing e.g. a web browser at the first user terminal. Preferably, but not necessarily, the user provides the response using the second terminal such as a mobile terminal. The first terminal may refer to e.g. a desktop or laptop computer that may be personal or in wider use. The second terminal, particularly if it is a mobile terminal such as a smartphone, is typically a personal device associated with a certain user only, or at least with a rather limited group of users.
The system may be configured to link or associate the first and second terminals together relative to the ongoing session and authentication task. As a result, actions taken utilizing the second terminal may be linked with activity or response at the first terminal, e.g. browser thereat, by the system.
For example, the system may be configured to dynamically allocate a temporary id, such as a so-called session id, to the first terminal. This id may comprise a socket id. The first terminal may then be configured to indicate the id to the user and/or the second terminal. For example, a visual, optionally coded, representation applying a QR (Quick Response) code, preferably including also other information such as the user id (to the service) and/or domain information, may be utilized. The second terminal may then be configured to wirelessly obtain the id. Preferably, the second terminal may read or scan, e.g. via a camera and associated code reader software, the visual representation and decode it. Preferably the same application, e.g. a Java application, that is applied for receiving voice input from the user is utilized for delivering the obtained id back towards the system, which then associates the two terminals and the session running in the first terminal (via the browser) together.
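The two-terminal linking may be sketched as follows; the payload fields mirror the QR contents suggested above, while the function names and the in-memory session store are editorial assumptions:

```typescript
// Editorial sketch: link the browser session and the mobile terminal via
// a dynamically allocated session id carried inside a QR code. The "scan"
// is simulated by passing the decoded payload directly.
import { randomUUID } from "node:crypto";

interface QrPayload { sessionId: string; userId: string; domain: string; }

const pendingSessions = new Map<string, { userId: string; mobileId?: string }>();

function openBrowserSession(userId: string): QrPayload {
  const sessionId = randomUUID();
  pendingSessions.set(sessionId, { userId });
  return { sessionId, userId, domain: "service.example" }; // encoded into the QR code
}

// Called once the mobile application has scanned and decoded the QR code.
function linkMobileTerminal(payload: QrPayload, mobileId: string): boolean {
  const session = pendingSessions.get(payload.sessionId);
  if (!session || session.userId !== payload.userId) return false;
  session.mobileId = mobileId; // both terminals now share the same session
  return true;
}

const qr = openBrowserSession("alice");
console.log(linkMobileTerminal(qr, "IMEI-123456")); // true
```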
In some embodiments, the determination tasks may include a number of mapping, feature extraction, and/or comparison actions according to predetermined logic by which the match between the obtained sound data and the existing voiceprint data relative to the indicated existing user is confirmed, i.e. the authentication is considered successful in the light of such a voice-based authentication factor. In the case of no match, i.e. failed voice-related authentication, the authentication status may remain as is or be lowered (or access completely denied).
In some embodiments, elevating the gained (current) authentication status in connection with successful voice-based authentication may include at least one action selected from the group consisting of: enabling service access, enabling a new service feature, enabling the use of a new application, enabling a new communication method, and enabling the (user) adjustment of service settings or preferences.
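A compact sketch of the status policy described in the two preceding paragraphs; the numeric levels and the feature gates are editorial assumptions:

```typescript
// Editorial sketch: a successful voice match elevates the session status
// and unlocks features; a failed match keeps or lowers the status.
type Status = 0 | 1 | 2; // 0 = denied, 1 = logged in, 2 = voice-verified

function updateStatus(current: Status, voiceMatch: boolean, strict = false): Status {
  if (voiceMatch) return 2;    // elevate on successful voice authentication
  return strict ? 0 : current; // on failure: deny outright, or leave as is
}

const featureGates: Record<string, Status> = {
  "view-documents": 1,
  "adjust-settings": 2, // e.g. settings adjustment requires elevation
};

const canUse = (feature: string, s: Status) => s >= (featureGates[feature] ?? 2);
console.log(canUse("adjust-settings", updateStatus(1, true))); // true
```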
In some embodiments, a visual cue defines a graphical image that is rendered on a display device for perception and visual inspection by the user. The image may define or comprise a graphical pattern, drawing or e.g. a digital photograph.
Preferably, the image is complex enough so that the related (voice) association the user has also bears the necessary complexity and/or length in view of sound data analysis (too short or too simple a voice input/voiceprint renders making reliable determinations difficult).
In some embodiments, an audiovisual cue includes a video clip or video file with associated integral or separate sound file(s). Alternatively or additionally, an audiovisual cue may incorporate at least one graphical image and related sound.
Generally, video and audiovisual cues are indicated by e.g. a screenshot or other descriptive graphical image, and/or text, shown in the service UI. The image or a dedicated UI feature (e.g. a button symbol) may then be utilized to activate the video playback by the user through clicking or otherwise selecting the image/feature, for instance. Alternatively, e.g. video cue(s) may play back automatically, optionally repeatedly.
In some embodiments, the audio cue includes sound, typically in the form of at least one sound file, which may be e.g. monophonic or stereophonic. The sound may represent music, sound scenery or a landscape (e.g. jungle sounds, a waterfall, city or traffic sounds, etc.), various noises, or e.g. speech.
An audio cue may, despite its non-graphical/invisible nature, still be associated with an image represented via the service UI. The image used to indicate an audio cue is preferably at least substantially the same (i.e. non-unique) with all audio cues, but anyhow enables visualizing an audio cue in the UI among e.g. visual or audiovisual cues, the cues being optionally rendered as a horizontal sequence of images (typically one image per cue) of the overall challenge. As with video or audiovisual cues, the image may be active, and selecting, or 'clicking', it advantageously then triggers the audible reproduction of the cue.
Alternatively or additionally, a common UI feature such as an icon may be provided to trigger sequential reproduction of all audio, and optionally audiovisual, cues.
In some embodiments, and in the light of the foregoing, basically all the cues may be indicated in a (horizontal) row or column, or using some other configuration, via the service UI.
Visually distinguishable, clear ordering of the cues is advantageous as the user may immediately realize the corresponding, correct order of the cue-specific (sub-)responses in his/her overall voice response.
Video, audiovisual and/or audio cues may at least have a representative, generic or characterizing, graphical image associated with them as discussed above, while graphical (image) cues are preferably shown as such.
In some embodiments, at least one cue is selected or provided, optionally created, by the user himself/herself. A plurality of predetermined cues may be offered by the system to the user for review via the service UI, from which the user may select one or more suitable, e.g. the most memorable, cues to be associated with voiceprints. Preferably, a plurality of cues is associated with each user.
A voiceprint, i.e. a voice-based fingerprint, may be determined for a cue based on a user's sound, or specifically voice, sample recorded and audibly exhibiting the user's association (preferably a brainworm) relating to each particular cue. A voiceprint of the present invention thus advantageously characterizes, or is used to characterize, both the user (utterer) and the spoken message (the cue or the substantive personal association with the cue) itself. Recording may be effectuated using the audio input features available in a terminal device, such as a microphone, analogue-to-digital conversion means, an encoder, etc. With different users, a number of same or similar cues may be generally utilized.
Obviously, the voiceprints associated with them are personal.
In some embodiments, the established service connection (access) is maintained based on a number of security measures, the outcome of which is used to determine the future of the service connection, i.e. let it remain, terminate it, or change it, for example. In some scenarios, fingerprint methodology may be applied. A user terminal may initially, upon service log-in, for instance, provide a fingerprint based on a number of predetermined elements, such as browser data such as version data, OS data such as version data, obtained Java entity data such as version data, and/or obtained executable data such as version data. Version data may include ID data such as a version identifier or generally the identifier (application or software name, for example) of the associated element. The arrangement may be configured to request a new fingerprint in response to an event such as a timer or other temporal event (timed requests, e.g. on a regular basis). Alternatively or additionally, the client may provide fingerprints independently based on a timer and/or some other event, for instance.
In response to the received new fingerprint, the arrangement may utilize the most recent fingerprint and a number of earlier fingerprints, e.g. the initial one, in a procedure such as a comparison procedure. The procedure may be executed to determine the validity of the current access (user). For example, if the compared fingerprints match, a positive outcome may be determined, indicating no increased security risk, and the connection may remain as is. A mismatch may trigger a further security procedure or termination of the connection.
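A minimal editorial sketch of this fingerprint comparison, assuming the listed version elements are canonicalized and hashed with SHA-256 (an assumed choice; the patent does not prescribe a hash):

```typescript
// Editorial sketch: hash a tuple of client version identifiers at login,
// then compare later fingerprints against the stored one.
import { createHash } from "node:crypto";

interface ClientElements {
  browser: string;     // e.g. "Firefox/115.0"
  os: string;          // e.g. "Windows 10"
  javaEntity?: string; // obtained Java entity version data, if any
  executable?: string; // obtained executable version data, if any
}

function fingerprint(e: ClientElements): string {
  const canonical = [e.browser, e.os, e.javaEntity ?? "", e.executable ?? ""].join("|");
  return createHash("sha256").update(canonical).digest("hex");
}

function validateSession(initialFingerprint: string, latest: ClientElements): boolean {
  // mismatch would trigger a further security procedure or termination
  return fingerprint(latest) === initialFingerprint;
}
```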
In some embodiments, the system is location-aware, advantageously in the sense that it utilizes location information to authenticate the user. A number of predetermined allowed and/or non-allowed/blocked locations may be associated with each user of the arrangement. For example, the location may refer to at least one element selected from the group consisting of: address, network address, sub-network, IP (Internet Protocol) address, IP sub-network, cell, cell ID, street address, one or more coordinates, GPS coordinates, GLONASS coordinates, district, town, country, continent, distance to a predetermined location, and direction from a predetermined location. Each of the aforesaid addresses may refer to an address range.
Failed location-based authentication may result in a failed overall authentication (denied access), or alternatively, a limited functionality such as limited access to the service may be provided. The same applies to potential other authentication factors. Each authentication factor may be associated with a characterizing weight (effect) in the authentication process.
In some embodiments, the system may be configured to transmit a code, preferably as browser data such as web page data, during a communication session associated with a predetermined user of the service, for visualization and subsequent input by the user. Further, the system may be configured to receive data indicative of the inputted code and of the location of the terminal device applied for transmitting the data, determine on the basis of the data and the predetermined locations associated with the user whether the user currently is in an allowed location, and provided that this seems to be the case on the basis of the data, raise the gained authentication status of the user regarding at least the current communication session. Preferably the data is received from a mobile (terminal) device. Optionally, the code is indicated to the user via a first terminal device such as a laptop or desktop computer. Instead of a code dedicated for the purpose, e.g. the aforesaid temporary id such as a socket id may be utilized in this context as well.
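An illustrative sketch of the combined code-and-location check, simplified to country-level matching (one of the many location elements listed above); all names are editorial assumptions:

```typescript
// Editorial sketch: the mobile terminal returns the visualised code
// together with its location; the server checks both before raising the
// authentication status.
interface LocationFix { country: string; ipSubnet?: string; }

const allowedLocations = new Map<string, LocationFix[]>([
  ["alice", [{ country: "FI" }, { country: "SE" }]],
]);

function checkLocationFactor(
  userId: string,
  inputtedCode: string,
  expectedCode: string,
  fix: LocationFix,
): boolean {
  if (inputtedCode !== expectedCode) return false; // wrong or stale code
  const allowed = allowedLocations.get(userId) ?? [];
  return allowed.some(l => l.country === fix.country);
}

console.log(checkLocationFactor("alice", "ab12", "ab12", { country: "FI" })); // true
```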
A certain location may be associated with a certain user by "knowing" the user, which may refer to optionally automatically profiling and learning the user via monitoring one's habits such as location and optionally movements. As a result, a number of common, or allowed, locations may be determined and subsequently utilized for authentication purposes. Additionally or alternatively, the user may manually register a number of allowed locations for utilizing the solution in the arrangement. Generally, in various embodiments of the present invention, by knowing the user and/or his/her gear and utilizing the related information, such as location information, in connection with access control, conducting automated attacks such as different dictionary attacks against the service may be made more futile.
In some scenarios, the location of the user (terminal) and/or the data route may be estimated, e.g. by the system, based on transit delay and/or round-trip delay. For example, delays relating to data packets may be compared with delays associated with a number of e.g. location-wise known references such as reference network nodes, which may include routers, servers, switches, firewalls, terminals, etc. Yet in a further, either supplementary or alternative, embodiment, the electronic service is a cloud service (running in a cloud). Additionally or alternatively, the service may arrange a virtual desktop and/or remote desktop to the user, for instance.
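The delay-based estimate may be sketched as a nearest-neighbour comparison against location-wise known reference nodes; single-sample matching is an editorial simplification, as a real deployment would combine several delay samples:

```typescript
// Editorial sketch: pick the reference node whose known round-trip time
// is closest to the observed one and use its location as the estimate.
interface ReferenceNode { name: string; location: string; rttMs: number; }

function estimateLocation(observedRttMs: number, refs: ReferenceNode[]): string | undefined {
  let best: ReferenceNode | undefined;
  for (const ref of refs) {
    if (!best || Math.abs(ref.rttMs - observedRttMs) < Math.abs(best.rttMs - observedRttMs)) {
      best = ref;
    }
  }
  return best?.location;
}

console.log(estimateLocation(42, [
  { name: "router-hel", location: "Helsinki", rttMs: 40 },
  { name: "router-fra", location: "Frankfurt", rttMs: 65 },
])); // "Helsinki"
```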
In another aspect, a method for controlling access to an electronic service, such as a cloud virtual-desktop service or other online service, preferably utilizing multi-factor authentication, comprises: storing, for a number of users, a plurality of personal voiceprints, each of which is linked with a dedicated visual, audiovisual or audio cue, for challenge-response authentication of the users; picking, upon receipt of an authentication request associated with an existing user of said number of users, a number of cues for which voiceprints of the existing user are stored, for representation to the user of the service as a challenge; receiving a user response incorporating sound data indicative of the voice response uttered by the user of the service to the represented cues; determining, on the basis of the sound data, the represented cues and the linked voiceprints, whether the response has been uttered by the existing user of said number of users; and, provided that this seems to be the case, elevating the authentication status of the user of the service acknowledged as the existing user according to the determination, preferably regarding at least the current communication session.
The previously presented considerations concerning the various embodiments of the system may be flexibly applied to the embodiments of the method mutatis mutandis, and vice versa, as being appreciated by a skilled person.
The utility of the present invention follows from a plurality of issues depending on each particular embodiment. Cleverly, the associative memory of users, and also a phenomenon relating to a memory concept often referred to as brainworms, or earworms, regarding things and related associations one seems to remember basically reluctantly but still with ease (e.g. songs that are stuck inside one's mind and that one cannot get out of his/her head), can be harnessed into utilization in the context of authentication together with voice recognition. One rather fundamental biometric property, i.e. voice, is exploited as an authentication factor together with features of speech (i.e. voice input message content) recognition. Also other factors, e.g. location data indicative of the location of the user (terminal), may be applied for authentication purposes.
Rather regularly, people manage to associate different things like sounds, images, videos, etc. together autonomously or automatically and recall such a, potentially complex and/or lengthy, association easily after many years (complexity and length being advantageous properties in connection with authentication, particularly if the related voice inputs and voiceprints exhibit similar characteristics in conjunction with the present invention), even if the association as such was originally subconscious or on some occasions even undesired as the person in question sees it. By the present solution, users are provided with an authentication challenge as a number of cues, such as images, videos and/or sounds, for which they themselves initially determine the correct response they want to utilize in the future during authentication. Instead of hard-to-remember numerical or character-based code strings, the user may simply associate each challenge with the first associative, personal response that comes to mind and apply that memory image in the forthcoming authentication events based on voice recognition, as for each cue a voiceprint is recorded indicative of the correct response, whereupon the user is required to repeat the voice response upon authentication when the cue is represented to him/her as a challenge.
Yet, a technically feasible, security-enhancing procedure is offered for linking a number of terminals together from the standpoint of the electronic service, related authentication and the ongoing session.
The expression "a number of' refers herein to any positive integer starting from one (1), e.g. to one, two, or three.
The expression "a plur&ity of' refers herein to any positive integer starting from two (2), e.g. to two, three, or four.
The expression "data transfer" may refer to transmitting data, receiving data, or both, depending on the role(s) of a particular entity under analysis relative a data transfer action, i.e. a role of a sender, a role of a recipient, or both.
The terms "electronic service" and "electronic application" are herein utilized interchangeably.
The terms "a" and "an" do not denote a limitation of quantity, but denote the presence of at least one of the referenced item.
The terms "first" and "second" do not denote any order, quantity, or importance, but rather are used to distinguish one element from another.
Different embodiments of the present invention are disclosed in the dependent claims.
BRIEF DESCRIPTION OF THE RELATED DRAWINGS
Next the invention is described in more detail with reference to the appended drawings, in which Fig. 1a illustrates the concept of the present invention via both block and signaling diagram approaches relative to an embodiment thereof.
Fig. 1b is a block diagram representing an embodiment of selected internals of the system according to the present invention.
Fig. 2a represents one example of a service UI view in connection with user authentication.
Fig. 2b represents a further example of a service UI view in connection with user authentication.
Fig. 2c represents a further example of a service UI view in connection with user authentication.
Fig. 2d represents a further example of a service UI view in connection with user authentication.
Fig. 3 is a flow chart disclosing an embodiment of a method in accordance with the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Figure 1a illustrates an embodiment of the present invention. The embodiment may be generally related, by way of example only, to the provision of a network-based or particularly online-type electronic service such as a virtual desktop service or a document delivery service, e.g. delivery of a bill including a notice of maturity regarding a bank loan. Entity 102 refers to the service user (recipient) and associated devices, such as a desktop or laptop computer and/or a mobile device, utilized for accessing the service in the role of a client, for instance. The device(s) preferably provide access to a network 108 such as the Internet. The mobile device, such as a mobile phone (e.g. a smartphone) or a PDA (personal digital assistant), may preferably be wirelessly connected to a compatible network, such as a cellular network. Preferably the Internet may be accessed via the mobile device as well. The mobile device may comprise a browser. Entity 106 refers to a system or network arrangement of a number of at least functionally connected devices such as servers. The communication between the entities 102 and 106 may take place over the Internet and underlying technologies, for example. Preferably the entity 106 is functionally also connected to a mobile network.
Indeed, in the context of the shown embodiment of the present invention, the user 102 of an electronic service 106 incorporating, or at least utilizing, an embodiment of the system in accordance with the present invention (these two terms being therefore used interchangeably hereinafter) is preferably associated with a first terminal device 102a such as a desktop or laptop computer, a thin client, or a tablet/hand-held computer provided with network 108 access, typically Internet access. Yet, the user 102 preferably has a second terminal 102b, such as a mobile communications device, with him/her, advantageously being a smartphone or a corresponding device with an applicable mobile subscription or other wireless connectivity enabling the device to transfer data e.g. between local applications and the Internet. Many contemporary and forthcoming higher-end mobile terminals qualifying as smartphones bear the necessary capabilities for both e-mail and web surfing purposes among various other sophisticated features, including e.g. a camera with an optional optical code (e.g. QR code) reader application. In most cases, such devices support a plurality of wireless communication technologies such as cellular and wireless local area network (WLAN) type technologies. A number of different, usually downloadable or carrier-provided, such as memory card-provided, software solutions, e.g. client applications, may be run on these 'smart' terminal devices.
The potential users of the provided system and method include different network service providers, operators, cloud operators, virtual and/or remote desktop service providers, application/software manufacturers, financial institutions, companies, and individuals in the role of a service provider, intermediate entity, or end user, for example. The invention is thus generally applicable in a wide variety of different use scenarios and service/document delivery applications.
In some embodiments the service 106 may include a customer portal service and the service data may correspondingly include customer portal data. Through the portal, the user 102 may inspect the available general data, company or other organization-related data, or personal data such as data about rental assets, estate or other targets. Service access in general, and access to certain features or sections thereof, may require authentication. Multi-level authentication may be supported such that each level can be mapped to predetermined user rights regarding the service features. The rights may define the authentication level and optionally also user-specific rules for service usage, and thereby allow feature usage, exclude feature usage, or limit feature usage (e.g. allow related data inspection but prevent data manipulation), for instance.
Initially, at 126 the system 106 may be ramped up and configured to offer the predetermined service to the users, which may also include creation of user accounts, definition of related user rights, and provision of the necessary authentication mechanism(s). Then, the user 102 may execute the necessary registration procedures via his/her terminal(s) and establish a service user account cultivated with mandatory or optional information such as user id, service password, e-mail address, personal terminal identification data (e.g. mobile phone number, IMEI code, IMSI code), and especially voiceprints in the light of the present invention. This obviously bi-directional information transfer between the user/user device(s) 102 and the system/service 106, requiring performing related activities at both ends, is indicated by items 128, 130 in the figure.
Figure 2a visualizes the voiceprint creation in the light of the possible related user experience. A number of potential cues, such as graphical elements, 202 may be first indicated to the user via the service UI 200. Advantageously, the user naturally links at least some cues with certain associations based on e.g. his/her memories and potentially brainworms, so that the association is easy to recall and unambiguous (only one association per cue; for example, upon seeing a graphical representation of a cruise ship, the user always comes up with a memory relating to a trip to the Caribbean, whereupon the natural association is 'Caribbean', which is then that user's voice response to the cue of a cruise ship).
Further information 204, such as the size of a captured sound file, may also be shown. The user may optionally select a sub-set of all the indicated cues and/or provide (upload) cues of his/her own to the system for use during authentication in the future. There is preferably a minimum size defined for the sub-set, i.e. the number of cues each user should be associated with. That could be three, five, six, nine, ten, or twelve cues, for example. Further, the sound sample to be used for creating the voiceprint, and/or as at least part of a voiceprint, may be assigned a minimum acceptable duration in terms of e.g. seconds.
As mentioned hereinearlier, the cues may be visual, audible, or a combination of both. Regarding the user-associated cues, the user may then input, typically utter, his/her voice response, based on which the system determines the voiceprints, preferably at least one dedicated voiceprint corresponding to each cue in the sub-set. A voiceprint associated with a cue preferably characterizes both the voice and the spoken sound, or message, of the response. In other words, the same message later uttered by another user does not match the voiceprint of the first user during the voice authentication phase, even though the first user uttered the very same message to establish the voiceprint. On the other hand, a message uttered by the first user does not match a voiceprint established based on another message uttered by the first user.
For voice characterization, the system may be configured to extract a number of parameters describing the properties of the user's vocal tract, for example related formant frequencies. Such frequencies typically indicate the personal resonance frequencies of the vocal tract of the speaker.
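Full formant tracking would typically require LPC analysis; as a much simpler editorial stand-in (not part of the original disclosure), the following sketch estimates a frame's fundamental frequency by autocorrelation. Frame length, sample rate and the 80-400 Hz search range are assumptions:

```typescript
// Editorial sketch: estimate the fundamental frequency of one audio frame
// by finding the lag that maximises the autocorrelation of the signal.
function fundamentalFrequency(frame: Float32Array, sampleRate = 16000): number {
  const minLag = Math.floor(sampleRate / 400); // upper bound of search: 400 Hz
  const maxLag = Math.floor(sampleRate / 80);  // lower bound of search: 80 Hz
  let bestLag = minLag;
  let bestCorr = -Infinity;
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < frame.length; i++) corr += frame[i] * frame[i + lag];
    if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
  }
  return sampleRate / bestLag; // estimated F0 in Hz
}
```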
Next, reverting to Fig. 1a and switching over (indicated by the broken line in the figure) to a scenario in which the user has already set up a service account and wishes to authenticate to reach a desired authentication status within the service 106: at 132 the user 102 may trigger the browser in the (first) terminal and control it to connect to the target electronic service 106. Accordingly, the user may now log into the service 106 using his/her service credentials or provide at least some means of identification thereto, as indicated by items 132, 134.
The system managing the service 106 may comprise e.g. a Node.js server entity with which the web browser registers itself, whereupon the service side allocates a dynamic id, such as a socket id or other session id, and delivers it at 136 to the browser, which indicates the id and optionally other information such as domain, user id/name, etc. to the user via a display at item 138.
Figure 2b illustrates an embodiment of potential service UI features at this stage through a snapshot of UI view 200B. The dynamic id is shown to the user both as a numeric code and as embedded in a QR code at 208. Items 206 indicate the available authentication elements or factors, whereas the data at 210 implies the current authentication level of the session with related information. Instead of a QR code, some other matrix barcode or a completely different visual representation could be utilized.
With reference to Figure 1a again, at 140 the code is read, by using a camera-based code reader application, for instance, into the second terminal such as the mobile terminal of the user. Then the mobile device is configured, using the same or another predetermined application, to transfer an indication of the obtained data, such as the dynamic id and data identifying the terminal or an entity such as a smart card therein, to the system 106, wherein a target entity, such as a socket.io entity configured to operate as a (browser-type) client to the Node.js server entity, forwards at least part of the data, including the dynamic id, to the Node.js server entity, which can thus dynamically link, and preferably shall link, the particular first and second terminals to the same ongoing service (authentication) session at 142.
Also the subsequent data transfer activities, e.g. transfer of the voice response, from the second terminal to the system may be at least partially implemented utilizing the same route and related technique(s). Different entities on the system side may, in practical circumstances, be implemented by one or more physical devices such as servers. For example, a predetermined server device may implement the Node.js server whereas another server device may implement the socket.io client entity.
Next, at 144 the system 106 fetches a number (potentially dynamically changing according to predetermined logic) of cues associated with the user account initially indicated/used in the session and for which voiceprints are available. The cues may be basically randomly selected (and order-wise also randomly represented to the user). The cues are indicated (transferred) to the browser in the terminal, which then represents them to the user at 146, e.g. via a display and/or audio reproduction means depending on the nature of the cues. E.g. Ajax (Asynchronous JavaScript and XML) and PHP (Hypertext Preprocessor) may be utilized for terminal-side browser control. Mutually, the cues may be of the same or mixed type (e.g. one graphical image cue, one audio cue, and one video cue optionally with an audio track).
As the user 102 perceives the cues as an authentication challenge, he/she provides the voice response, preferably via the second terminal, at 148 to the service 106 via a client application that may be the same application used for transferring the dynamic id forward. The client-side application for the task may be a purpose-specific Java application, for example. In Figure 2c, four graphical (image) cues are indicated at 212 in the service UI view 200C (browser view).
Also visible in the figure is a plurality of service features at 214, some of which are greyed out, i.e. non-active features, due to the current insufficient authentication level. Indeed, a service, or particularly a service application or UI feature, may potentially be associated with a certain minimum security level required for access.
Automatic expiration time for the session may also be indicated via the UI. Preferably, a session about to expire, or expired, may be renewed by repeated/new authentication.
In Figure 1a, at 150 the service 106 analyzes the obtained user response relative to the cues against the voiceprints using predetermined matching technique(s) and/or algorithms. In primary embodiments, the input order of (sub-)responses corresponding to individual cues in the overall response should match the order in which the cues were represented in the service UI (e.g. in a row, from left to right). In some other embodiments, the system 106 may, however, be configured to analyze whether the order of sub-responses matches the order of the cues given, or at least to try different ordering(s). Optionally, the system 106 may be configured to rearrange the sub-responses relative to the cues to obtain e.g. a better voiceprint matching result during the analysis.
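The order-tolerant variant can be sketched as scoring every permutation of the sub-responses against the cue-ordered voiceprints (practical only for the small cue counts involved here); the score callback and threshold handling are editorial assumptions:

```typescript
// Editorial sketch: primary mode scores sub-responses in cue order;
// tolerant mode also tries permutations and keeps the best total score.
function permutations<T>(items: T[]): T[][] {
  if (items.length <= 1) return [items];
  return items.flatMap((item, i) =>
    permutations([...items.slice(0, i), ...items.slice(i + 1)]).map(rest => [item, ...rest]),
  );
}

function bestMatch(
  subResponses: number[][],                    // one feature vector per utterance
  cuePrints: number[][],                       // stored voiceprints, in cue order
  score: (a: number[], b: number[]) => number, // e.g. cosine similarity
  tolerant = false,
): number {
  const orders = tolerant ? permutations(subResponses) : [subResponses];
  let best = -Infinity;
  for (const order of orders) {
    const total = order.reduce((sum, resp, i) => sum + score(resp, cuePrints[i]), 0);
    best = Math.max(best, total);
  }
  return best; // compared against a calibrated acceptance threshold
}
```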
When the response (e.g. parameter(s) derived therefrom) matches the voiceprints sufficiently according to predetermined logic, the voice authentication procedure may be considered successful, and the authentication level may be scaled (typically raised) accordingly at 152. On the other hand, if the voice-based authentication fails (non-match), the authentication status may be left intact or lowered, for instance. The outcome of such an authentication procedure is signaled to the user (preferably at least to the first terminal, potentially both) for review, e.g. as an authentication status message via the service UI at 154. New features may be made available to the user in the service UI.
Figure 2d depicts a possible service UI view 200D after successful voice authentication. Explicit indication of the outcome of the authentication procedure is provided at 218 by an authentication status message, and as an implicit indication thereof, more service features 216 have been made available to the user (not greyed out anymore, which the user will immediately recognize).
In some embodiments, location information may be optionally utilized in the authentication process as such. In one embodiment, the server 106 and/or other entities external to the user's 102 terminal gear may be configured to locate one or more of the terminals the user 102 applies for communicating with the service 106. Alternatively or additionally, the terminal devices may bear a role of their own in the positioning process and execute at least part of the necessary positioning actions locally. Actions required to position a terminal may be shared between the terminal(s) and at least one external entity.
For instance, address information may be used in the positioning process to deduce the location of the particular terminal in question (see Figures 2b-2d, wherein IP location has been identified as one applied authentication/identification criterion). Somewhat typically, terminal or access network addresses such as IP addresses are at least loosely associated with physical locations, so that address-based locating is at least limitedly possible.
In connection with mobile devices, many other options are also available, including roaming signal and data transmission-based positioning. For example, by checking the ID of the base station(s) the mobile device is communicating with, at least the approximate location of the mobile device may be obtained. Yet, through more comprehensive signal analysis, such as TOA (Time-Of-Arrival), OTD (Observed-Time-Difference), or AOA (Angle-Of-Arrival), the mobile device may be located.
In some embodiments, a satellite navigation receiver, such as a GPS (Global Positioning System) or GLONASS (GLObal Navigation Satellite System) receiver, in connection with a terminal device may be exploited. The terminal may share the locally received satellite information with external entities as such or in cultivated form (e.g. ready-determined coordinates based on the received satellite signal(s)). Further, data entity transit times, such as data packet transit times or RTT times, may be monitored, if possible, e.g. in relation to both the monitored user/terminal and e.g. location-wise known reference entities, as described hereinbefore, in order to assess the location of the user/terminal by associated comparison.
On the basis of the terminal location, the system 106 may then introduce a further factor, i.e. a location-based factor, to the authentication procedure and verify whether the current location of the terminal in question matches predetermined location information defining a number of allowed locations and/or banned locations in the light of the service and/or document access.
Depending on the embodiment, the status of the location-based factor may be evaluated prior to the evaluation of the fulfilment of other authentication factors, in conjunction with them, or as a final check before authorizing the user to access the service and/or electronic document.
Figure 1b shows, at 130, a block diagram illustrating selected internals of an embodiment of the system presented herein. The system 106 may incorporate a number of at least functionally connected servers, and typically at least one device, such as a server or a corresponding entity with the necessary communications, computational and memory capacity, is included in the system.
A skilled person will naturally realize that terminal devices such as a mobile terminal or a desktop-type computer terminal utilized in connection with the present invention could generally include the same or similar elements. In some embodiments, also a number of terminals, e.g. the aforesaid first and/or second terminal, may be included in the system.
The system device(s) 106 is/are typically provided with one or more processing devices capable of processing instructions and other data, such as one or more microprocessors, micro-controllers, DSPs (digital signal processors), programmable logic chips, etc. The processing entity 120 may thus, as a functional entity, comprise a plurality of mutually co-operating processors and/or a number of sub-processors connected to a central processing unit, for instance.
The processing entity 120 may be configured to execute the code stored in a memory 122, which may refer to instructions and data relative to the software logic and software architecture for controlling the system 106. The processing entity 120 may at least partially execute and/or manage the execution of the aforesaid receiving, sending, determining, and/or enabling tasks.
Similarly, the memory entity 122 may be divided between one or more physical memory chips or other memory elements. The memory 122 may store program code and other data such as user contact information, electronic documents, various service data, etc. The memory 122 may further refer to and include other storage media such as a preferably detachable memory card, a floppy disc, a CD-ROM, or a fixed storage medium such as a hard drive. The memory 122 may be non-volatile, e.g. ROM (Read-Only Memory), and/or volatile, e.g. RAM (Random Access Memory), by nature. Software (product) may be provided on a carrier medium such as a memory card, a memory stick, an optical disc (e.g. CD-ROM or DVD), or some other memory carrier.
The UI (user interface) 124, 124B may comprise a display or a data projector 124, and a keyboard/keypad or other applicable user (control) input entity 124B, such as a touch screen and/or a voice control input, or a number of separate keys, buttons, knobs, switches, a touchpad, a joystick, and/or a mouse, configured to provide the user of the system with practicable data visualization and device control means, respectively. The UI may include one or more loudspeakers and associated circuitry such as D/A (digital-to-analogue) converter(s) for sound output, and optionally a microphone with an A/D converter for sound input (obviously the terminal device capturing voice input from the user at least has one; external loudspeaker(s), earphones, and/or microphone(s) may be utilized thereat, for which purpose the UI preferably contains suitable wired or wireless (e.g. Bluetooth) interfacing means in the terminal). A printer may be included in the arrangement for providing more permanent output.
The system 106 further comprises a data interface 126 such as a number of wired and/or wireless transmitters, receivers, and/or transceivers for communication with other devices such as terminals and/or network infrastructure(s). For example, an integrated or a removable network adapter may be provided. Non-limiting examples of the generally applicable technologies include WLAN (Wireless LAN, wireless local area network), LAN, WiFi, Ethernet, USB (Universal Serial Bus), GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), EDGE (Enhanced Data rates for Global Evolution), UMTS (Universal Mobile Telecommunications System), WCDMA (wideband code division multiple access), CDMA2000, PDC (Personal Digital Cellular), PHS (Personal Handy-phone System), and Bluetooth. Some technologies may be supported by the elements of the system as such, whereas some others (e.g. cell network connectivity) are provided by external, functionally connected entities.
It is clear to a skilled person that the system 106 may comprise numerous additional functional and/or structural elements for providing advantageous communication, processing or other features, whereupon this disclosure is not to be construed as limiting the presence of the additional elements in any manner.
Entity 128 refers to such additional element(s) found useful depending on the embodiment.
At 130B, potential functional or logical entities implemented by the system 106 (mostly by processing element(s) 120, memory element(s) 122 and communications element(s) 126) for voice authentication are indicated. Profiler 110 may establish the cue-associated voiceprints for the users based on the voice input by the users. The input may include speech or generally voice samples originally captured by user terminal(s) and funneled to the profiler 110 for voiceprint generation including e.g. feature extraction. Element 112 refers to a voiceprint repository 112 that may, in practice, contain a number of databases or other data structures for maintaining the personal voiceprints determined for the cues based on voice input by the user(s).
Voiceprint data is personal (user account or user id related) and characterizes the correct voice response to each cue (in the cue sub-set used for authenticating that particular user). Voiceprint data may indicate, as already alluded to hereinbefore, e.g. fundamental frequency data, vocal tract resonance(s) data, duration/temporal data, loudness/intensity data, etc. Voiceprint data may indicate personal (physiological) properties of the user 102 and characteristics of received sample data (thus advantageously characterizing also the substance or message of the input) obtained during the voiceprint generation procedure. In that sense, the voice recognition engine used in accordance with the present invention may also incorporate characteristics of speech recognition.
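By way of a purely illustrative aside (not part of the original disclosure), the following TypeScript sketch shows what a toy voiceprint record of this kind could look like: it derives duration, loudness (RMS intensity), and a crude fundamental-frequency proxy from raw 16-bit mono PCM samples. The names `Voiceprint` and `extractVoiceprint` are hypothetical; a production engine would rather rely on e.g. cepstral features and formant tracking.

```typescript
// Toy voiceprint features from 16-bit mono PCM samples (hypothetical names;
// a real engine would use MFCCs, formant tracking, etc.).
interface Voiceprint {
  durationSec: number;   // temporal data
  rmsIntensity: number;  // loudness/intensity data
  pitchHz: number;       // crude fundamental-frequency proxy
}

function extractVoiceprint(samples: Int16Array, sampleRate: number): Voiceprint {
  let sumSquares = 0;
  let zeroCrossings = 0;
  for (let i = 0; i < samples.length; i++) {
    sumSquares += samples[i] * samples[i];
    if (i > 0 && (samples[i - 1] < 0) !== (samples[i] < 0)) zeroCrossings++;
  }
  const durationSec = samples.length / sampleRate;
  return {
    durationSec,
    rmsIntensity: Math.sqrt(sumSquares / samples.length),
    // Each full pitch period produces roughly two zero crossings.
    pitchHz: (zeroCrossings / 2) / durationSec,
  };
}
```

Enrolment at the profiler 110 would then store one such record per cue, and the analyzer 114 would compare fresh responses against it within some tolerance.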
Analyzer 114 may take care of substantially real-time matching, or generally analysis, of voice input against already existing voiceprints. Such analysis may include a number of comparisons according to predetermined logic for figuring out whether the speaker/utterer really is the user initially indicated to the system.
In some embodiments, profiler 110 and analyzer 114 may be logically implemented by a common entity due to e.g. similarities between the associated executed tasks. Authentication entity 116 may generally control the execution of authentication procedure(s), determine cues for an authentication task, raise/lower permanent or session-specific authentication levels based on the outcome thereof, and control e.g. data transfer with terminal devices and network infrastructure(s) including various elements.
Regarding certain embodiments with additional location-based authentication, the system 106 may provide a dedicated location(ing) id, a 'geokey', to the user 102, preferably through browser data such as a service view, e.g. a login/authentication view or a portal view. The user 102 may then notice the (visualized) id among the service data as a numeric code or generally a string of optionally predetermined length. The id may be dynamic, such as session-specific and/or for one-time use only. In some embodiments, the location id may be combined with the session id (or a common id be used) or generally with data provided by the system for voice authentication, e.g. via a machine readable optical code like the QR code.
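A minimal sketch of issuing such a dynamic, one-time geokey id follows, assuming a Node.js environment (consistent with the Node.js server mentioned in the claims); the field names, the in-memory store, and `issueGeoKey` are assumptions for illustration only.

```typescript
import { randomInt } from "node:crypto";

// Issue a one-time numeric 'geokey' bound to the current session
// (hypothetical sketch; names and layout are not from the patent).
interface GeoKey { sessionId: string; code: string; issuedAt: number }

const pending = new Map<string, GeoKey>(); // code -> geokey, single use

function issueGeoKey(sessionId: string, digits = 6): GeoKey {
  const code = String(randomInt(0, 10 ** digits)).padStart(digits, "0");
  const key: GeoKey = { sessionId, code, issuedAt: Date.now() };
  pending.set(code, key);
  return key; // rendered in the service view as text or encoded into a QR code
}
```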
The user 102 may input or read the code to the (second) terminal, after which the application installed thereat acquires location data according to predetermined logic based on available positioning options. Preferably, the location data is acquired in real-time or near real-time fashion upon receipt of the id so as to be current. For example, the device may contain a satellite receiver such as a GPS or GLONASS receiver through which location data may be obtained. In addition, the device may utilize a network and related signal(s) for obtaining location data, such as data provided by a cellular network and/or a short-range wireless network, optionally WLAN. Network-assisted positioning may be used. The application may be configured to utilize available interfaces provided with the mobile operating system for acquiring the positioning data.
Location data such as longitude information, latitude information, an accuracy or error estimate, the id itself or data derived therefrom, and/or a time code (or time stamp) may then be collected and transmitted to the system 106. Preferably at least part of the data is encrypted. Optionally, at least part of the above data elements may be utilized for determining a hash by means of a secret or asymmetric key, for example, in which case at least the hash is transmitted.
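As an illustration of the optional hashing step, the sketch below computes an HMAC-SHA256 over an assumed location payload with a shared secret; the exact fields and their serialization are not specified by the disclosure and are chosen here for demonstration.

```typescript
import { createHmac } from "node:crypto";

// Hash the location report with a shared secret before transmission
// (a sketch of the optional hashing step; payload layout is assumed).
interface LocationReport {
  id: string;        // the geokey itself or data derived from it
  lat: number;
  lon: number;
  accuracyM: number; // error estimate in metres
  timestamp: number;
}

function signReport(report: LocationReport, secret: string): string {
  const payload =
    `${report.id}|${report.lat}|${report.lon}|${report.accuracyM}|${report.timestamp}`;
  return createHmac("sha256", secret).update(payload).digest("hex");
}

// The terminal would send { report, hash }; the system recomputes the
// HMAC with its copy of the secret and rejects the report on mismatch.
```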
HTTPS may be utilized for the secured transfer. The system 106 receives and optionally processes, such as decodes, the data. Subsequently, the system 106 may verify the current location of the user 102, as indicated by the obtained location data, against predetermined data indicative of e.g. allowed location(s). The resolution of the obtained data and/or a related measurement error estimate may be utilized to adapt the decision-making. For example, in the case of a larger error / worse positioning accuracy, more tolerance may be allowed in the verification process, and vice versa.
In one embodiment, the system 106 is configured to maintain data about allowed (and/or rejected) user locations through utilization of polygon data, i.e. geo-referenced polygon data. For example, a number of allowed postal areas represented by the corresponding polygons may have been associated with each user. The obtained location data may be mapped to a corresponding postal area polygon that is then searched from the list of allowed postal area polygons. In such an embodiment, the aforesaid adaptation may be realized by stretching or shrinking the postal area polygon boundaries, for instance.
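A hedged sketch of such polygon matching follows: a standard ray-casting point-in-polygon test, with the boundary 'stretching' approximated very crudely by widening acceptance near polygon vertices in proportion to the reported positioning error. Real geo-referenced data would call for proper geodesic computations.

```typescript
// Ray-casting point-in-polygon test plus a crude 'stretching' of the
// boundary by a tolerance that grows with the reported position error.
type Point = { lat: number; lon: number };

function inPolygon(p: Point, poly: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const a = poly[i], b = poly[j];
    if ((a.lon > p.lon) !== (b.lon > p.lon) &&
        p.lat < ((b.lat - a.lat) * (p.lon - a.lon)) / (b.lon - a.lon) + a.lat) {
      inside = !inside;
    }
  }
  return inside;
}

function locationAllowed(p: Point, allowed: Point[][], errorM: number): boolean {
  // ~1 degree of latitude is about 111 km; widen acceptance as accuracy worsens.
  const tolDeg = errorM / 111_000;
  return allowed.some(poly =>
    inPolygon(p, poly) ||
    // Vertex-proximity check stands in for true boundary stretching here.
    poly.some(v => Math.hypot(v.lat - p.lat, v.lon - p.lon) < tolDeg));
}
```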
In the case of a positive outcome (allowed location detected), the system 106 may again update the authentication, or generally 'security', status of the user 102 accordingly, and typically raise it. In practice, the user 102 may be provided with enhanced access rights to service features such as payment/finance components, higher security documents, etc. as reviewed above. Each user may be associated with session-based information such as a session record dynamically keeping track of, among potential other issues, the user rights emerging from the successful authentication actions. A notification of the raised access security level or failed authentication may be transmitted to the user via the mobile application and/or through browser data. The system 106 may update the service parameters for the session automatically and provide an updated service view, such as a browser view, to the user's terminal.
Figure 3 discloses, by way of example only, a method flow diagram in accordance with an embodiment of the present invention.
At 302, the system of the present invention is obtained and configured, for example through loading and execution of related software, for managing the electronic service and related authentication mechanism(s). Further, for users willing or obliged to use voice authentication, the voiceprints shall be established as described earlier in this text. For example, the system may be trained by the user such that the user utters the desired response (association) to each cue in his/her (sub-)set of cues, whereupon the system extracts or derives the voiceprints based on the voice input. Further, the user may be asked to provide some general or specific voice input that is not directly associated with any voiceprint. Using that voice input, the system may generally model the user-specific voice and/or speech parameters later applied in voice-based authentication and voiceprint matching, for example.
At 304, an indication of a required authentication, such as a voice authentication request, is received from a user via the service UI (e.g. browser-based UI) or via a dedicated application. Related procedures, potentially incorporating linking the first and second terminals of the user relative to the current service session, have already been discussed in this text.
At 306, a number of cues (for which a voiceprint of the indicated user is available) are determined or selected, preferably from a larger group thereof. The selection may be random, alternating (subsequent selections preferably contain different cue(s)), or of some other type. The number of cues per authentication operation may be dynamically selected by the system as well. For example, if a previous voice authentication procedure regarding the same user identity failed, the next one could contain more (or fewer) cues, and potentially vice versa. Also the status of other authentication factor(s) may be configured to affect the number. For example, if the user has already been authenticated using some other authentication factor or element, e.g. location, the number of cues could be scaled lower than in a situation wherein the overall authentication status of the user is weaker. One illustrative policy is sketched below.
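The following fragment illustrates one such policy; the base count, the adjustment steps, and the function name are all assumptions, as the disclosure deliberately leaves the scaling logic open.

```typescript
// Dynamic choice of how many cues to present (illustrative policy only).
function cueCount(opts: {
  base: number;                  // default number of cues, e.g. 3
  lastAttemptFailed: boolean;    // previous voice authentication outcome
  otherFactorsSatisfied: number; // e.g. location already verified -> 1
}): number {
  let n = opts.base;
  if (opts.lastAttemptFailed) n += 1; // harden after a failure
  n -= opts.otherFactorsSatisfied;    // relax when already partly authenticated
  return Math.max(1, n);              // always challenge with at least one cue
}
```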
At 308, the cues are represented to the user, i.e. at least an indication of them is transmitted by the system to the (first) user terminal, potentially with instructions regarding visual and/or audible reproduction thereof, e.g. via a browser.
Preferably, the cues are represented in an easily noticeable and recognizable order so that the response thereto may be provided as naturally as possible, following the same order. For example, graphical cues may be represented in a series extending from left to right via the service UI, and the user may provide the voice response acknowledging each cue in the same, natural order, advantageously without a need to provide any separate, explicit control command for identifying the target cue during the voice input stage. The user may utter the response to each cue one after another by just keeping a brief pause in between, so that cue-specific (sub-)responses may be distinguished from each other (and associated with the proper cue) in the overall response afterwards by the terminal or the system based on the pauses. Alternatively, the user may explicitly indicate via the UI, through cue-specific icon/symbol selection, for instance, to which cue he/she is next providing the voice response.
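The pause-based segmentation could, for instance, be realized along the lines of the sketch below, which splits PCM input wherever signal energy stays under a threshold for a minimum duration; the 20 ms analysis window, the silence threshold, and the 0.4 s minimum pause are illustrative assumptions only.

```typescript
// Split one overall voice response into cue-specific sub-responses by
// detecting pauses (energy below a threshold for a minimum duration).
function splitOnPauses(
  samples: Int16Array,
  sampleRate: number,
  silenceRms = 500,   // assumed silence threshold for 16-bit PCM
  minPauseSec = 0.4,  // assumed minimum pause between sub-responses
): Int16Array[] {
  const win = Math.floor(sampleRate * 0.02);       // 20 ms analysis windows
  const minPauseWins = Math.ceil(minPauseSec / 0.02);
  const segments: Int16Array[] = [];
  let segStart = 0, quietWins = 0;
  for (let w = 0; w * win < samples.length; w++) {
    const s = samples.subarray(w * win, (w + 1) * win);
    const rms = Math.sqrt(s.reduce((acc, v) => acc + v * v, 0) / s.length);
    quietWins = rms < silenceRms ? quietWins + 1 : 0;
    if (quietWins === minPauseWins) {              // pause confirmed: close segment
      const end = (w + 1 - minPauseWins) * win;
      if (end > segStart) segments.push(samples.subarray(segStart, end));
      segStart = (w + 1) * win;
    }
  }
  if (segStart < samples.length) segments.push(samples.subarray(segStart));
  return segments; // ideally one segment per represented cue
}
```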
Indeed, at 310, the voice response to the challenge formed by the cues, such as graphical images, videos, and/or audio files, is provided by the user and forwarded via the terminal to the system. The sound data forwarded may include digital sound samples, such as so-called raw or PCM samples, or e.g. a heavily parameterized, compressed representation of the captured voice.
At 312, the obtained voice response data is analyzed against the corresponding personal (user-specific) voiceprints of the represented cues. The analysis tasks may include different matching and comparison actions following a predetermined logic. The logic may apply fixed threshold(s) for making decisions (successful authentication, failed authentication), or alternatively dynamic criteria may be applied. For instance, if heavy background noise is detected in the obtained sound data, the criteria could be loosened.
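One way to realize such loosening is sketched below: the acceptance threshold is relaxed as the estimated signal-to-noise ratio of the response drops. The per-cue scores, the SNR estimate, and every numeric constant are assumptions for illustration; the disclosure does not fix them.

```typescript
// Accept/reject decision with a threshold that is loosened when heavy
// background noise is detected (scoring and SNR estimation are assumed
// to come from the matching engine; numbers are illustrative).
function authenticationPassed(
  matchScores: number[], // one similarity score per cue, in 0..1
  snrDb: number,         // estimated signal-to-noise ratio of the response
): boolean {
  const baseThreshold = 0.8;
  // Loosen by up to 0.15 as SNR drops from 30 dB towards 10 dB.
  const loosening = Math.min(0.15, Math.max(0, (30 - snrDb) / 20) * 0.15);
  const threshold = baseThreshold - loosening;
  // Require every cue-specific sub-response to match its voiceprint.
  return matchScores.every(score => score >= threshold);
}
```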
At 314, the authentication status or level associated with the user is updated accordingly (raised, lowered, or left as is).
At 316, the method execution is ended.
A computer program, comprising code means adapted, when run on a computer, to execute an embodiment of the desired method steps in accordance with the present invention, may be provided. A carrier medium such as an optical disc, floppy disc, or a memory card, or other non-transitory carrier medium comprising the computer program may further be provided. The program may be further delivered over a communication network and generally over a communication channel.
Consequently, a skilled person may on the basis of this disclosure and general knowledge apply the provided teachings in order to implement the scope of the present invention as defined by the appended claims in each particular use case with necessary modifications, deletions, and additions.
Claims (16)
- Claims 1. An electronic system (106, 130) for authenticating a user of an electronic service, said system comprising at least one server apparatus, the system being configured to store (122, 112, 200), for a number of users, a plurality of personal voiceprints (204) each of which being linked with a dedicated visual, audiovisual or audio cue (202), for challenge-response authentication of the users, pick (116, 200C, 142, 144), upon receipt of an authentication request associated with an existing user of said number of users, a number of cues (212) for which there are voiceprints of the existing user stored, and provide the cues for representation (144, 126) to the user of the service as a challenge, receive (126, 148) sound data indicative of the voice response uttered by the user of the service to the represented cues, determine (114, 150) on the basis of the sound data, the represented cues and linked voiceprints, whether the response has been uttered by the existing user of said number of users, and provided that this seems to be the case, elevate (116, 152, 200D, 218, 216) the authentication status of the user of the service as the existing user, preferably regarding at least the current communication session.
- 2. The system of claim 1, wherein at least one cue comprises a graphical image (202) or video to be shown to the user via a display of a terminal device.
- 3. The system of any preceding claim, wherein at least one cue comprises an audio file, optionally a music or sound scenery file, to be audibly reproduced to the user.
- 4. The system of any preceding claim, further configured to initially determine a personal voiceprint for a cue based on a voice response of the user to the cue (128, 130, 200).
- 5. The system of any preceding claim, configured to link a first user terminal (102a) and a second user terminal (102b) with the ongoing service session of the user based on a dynamic id that is sent by the system (136) to the first terminal and returned by the second terminal (142).
- 6. The system of claim 5, comprising a Node.js server configured to remotely control a web browser based user interface (UI) of the service at the first user terminal (102a), and allocate the dynamic id, optionally a socket id, thereto, wherein the system is further configured to instruct (136) the first terminal to display the socket id visually in the service UI, optionally via a two-dimensional graphical code.
- 7. The system of claim 6, wherein the code comprises a QR (Quick Response) code.
- 8. The system of claim 6 or 7, comprising a socket.io entity configured to act as a client to the Node.js server, to receive the socket id transmitted by the second user terminal and forward it to the Node.js server for linking the first and second user terminals and the current service session of the user together.
- 9. The system of any preceding claim, further comprising a first user terminal (102a) for accessing the service and reproducing the cues and optionally a dynamic id allocated by the system to the user.
- 10. The system of claim 9, further comprising a second user terminal (102b), preferably a mobile device, comprising an application, optionally a Java application, for capturing the voice response by the user.
- 11. The system of claim 10, wherein the second user terminal is further configured to obtain a dynamic id allocated to the first terminal, preferably a browser thereat, and signal it (140) to said at least one server of the system.
- 12. The system of claim 11, wherein the second user terminal is configured to optically read a two-dimensional code representation of the id shown on the display of the first terminal.
- 13. The system of any preceding claim, configured to further utilize the estimated location of the user as an authentication factor, wherein the location estimate is based on the location data obtained relative to a user terminal.
- 14. A method for controlling access to an electronic service, comprising storing, for a number of users, a plurality of personal voiceprints each of which linked with a dedicated visual, audiovisual or audio cue, for challenge-response authentication of the users (302), picking, upon receipt of an authentication request associated with an existing user of said number of users, a number of cues for which there are voiceprints of the existing user stored, to be represented to the user of the service as a challenge (304, 306, 308), receiving a user response incorporating sound data indicative of the voice response uttered by the user of the service to the represented cues (310), determining on the basis of the sound data, the represented cues and linked voiceprints, whether the response has been uttered by the existing user of said number of users, and provided that this seems to be the case (312), elevating the authentication status of the user of the service acknowledged as the existing user according to the determination, preferably regarding at least the current communication session (314).
- 15. A computer program comprising code means adapted, when run on a computer, to execute the method items of claim 14.
- 16. A carrier medium comprising the computer program according to claim 15.

Amended claims have been filed as follows:

- 1. An electronic system (106, 130) for authenticating a user of an electronic service, said system comprising at least one server apparatus, the system being configured to store (122, 112, 200), for a number of users, a plurality of personal voiceprints (204) each of which being linked with a dedicated visual, audiovisual or audio cue (202), for challenge-response authentication of the users, wherein the cues are user-selected, user-provided or user-created, pick (116, 200C, 142, 144), upon receipt of an authentication request associated with an existing user of said number of users, a number of cues (212) for which there are voiceprints of the existing user stored, and provide the cues for representation (144, 126) to the user of the service as a challenge, receive (126, 148) sound data indicative of the voice response uttered by the user of the service to the represented cues, determine (114, 150) on the basis of the sound data, the represented cues and linked voiceprints, whether the response has been uttered by the existing user of said number of users, and provided that this seems to be the case, elevate (116, 152, 200D, 218, 216) the authentication status of the user of the service as the existing user, preferably regarding at least the current communication session.
- 2. The system of claim 1, wherein at least one cue comprises a graphical image (202) or video to be shown to the user via a display of a terminal device.
- 3. The system of any preceding claim, wherein at least one cue comprises an audio file, optionally a music or sound scenery file, to be audibly reproduced to the user.
- 4. The system of any preceding claim, further configured to initially determine a personal voiceprint for a cue based on a voice response of the user to the cue (128, 130, 200).
- 5. The system of any preceding claim, configured to link a first user terminal (102a) and a second user terminal (102b) with the ongoing service session of the user based on a dynamic id that is sent by the system (136) to the first terminal and returned by the second terminal (142).
- 6. The system of claim 5, comprising a Node.js server configured to remotely control a web browser based user interface (UI) of the service at the first user terminal (102a), and allocate the dynamic id, optionally a socket id, thereto, wherein the system is further configured to instruct (136) the first terminal to display the socket id visually in the service UI, optionally via a two-dimensional graphical code.
- 7. The system of claim 6, wherein the code comprises a QR (Quick Response) code.
- 8. The system of claim 6 or 7, comprising a socket.io entity configured to act as a client to the Node.js server, to receive the socket id transmitted by the second user terminal and forward it to the Node.js server for linking the first and second user terminals and the current service session of the user together.
- 9. The system of any preceding claim, further comprising a first user terminal (102a) for accessing the service and reproducing the cues and optionally a dynamic id allocated by the system to the user.
- 10. The system of claim 9, further comprising a second user terminal (102b), preferably a mobile device, comprising an application, optionally a Java application, for capturing the voice response by the user.
- 11. The system of claim 10, wherein the second user terminal is further configured to obtain a dynamic id allocated to the first terminal, preferably a browser thereat, and signal it (140) to said at least one server of the system.
- 12. The system of claim 11, wherein the second user terminal is configured to optically read a two-dimensional code representation of the id shown on the display of the first terminal.
- 13. The system of any preceding claim, configured to further utilize the estimated location of the user as an authentication factor, wherein the location estimate is based on the location data obtained relative to a user terminal.
- 14. A method for controlling access to an electronic service, comprising storing, for a number of users, a plurality of personal voiceprints each of which linked with a dedicated visual, audiovisual or audio cue, for challenge-response authentication of the users (302), wherein the cues are user-selected, user-provided or user-created, picking, upon receipt of an authentication request associated with an existing user of said number of users, a number of cues for which there are voiceprints of the existing user stored, to be represented to the user of the service as a challenge (304, 306, 308), receiving a user response incorporating sound data indicative of the voice response uttered by the user of the service to the represented cues (310), determining on the basis of the sound data, the represented cues and linked voiceprints, whether the response has been uttered by the existing user of said number of users, and provided that this seems to be the case (312), elevating the authentication status of the user of the service acknowledged as the existing user according to the determination, preferably regarding at least the current communication session (314).
- 15. A computer program comprising code means adapted, when run on a computer, to execute the method items of claim 14.
- 16. A carrier medium comprising the computer program according to claim 15.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1318876.8A GB2519571A (en) | 2013-10-25 | 2013-10-25 | Audiovisual associative authentication method and related system |
GB1320287.4A GB2519609B (en) | 2013-10-25 | 2013-11-18 | Audiovisual associative authentication method and related system |
PCT/FI2014/050807 WO2015059365A1 (en) | 2013-10-25 | 2014-10-27 | Audiovisual associative authentication method and related system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1318876.8A GB2519571A (en) | 2013-10-25 | 2013-10-25 | Audiovisual associative authentication method and related system |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201318876D0 GB201318876D0 (en) | 2013-12-11 |
GB2519571A true GB2519571A (en) | 2015-04-29 |
Family
ID=49767156
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1318876.8A Withdrawn GB2519571A (en) | 2013-10-25 | 2013-10-25 | Audiovisual associative authentication method and related system |
GB1320287.4A Expired - Fee Related GB2519609B (en) | 2013-10-25 | 2013-11-18 | Audiovisual associative authentication method and related system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1320287.4A Expired - Fee Related GB2519609B (en) | 2013-10-25 | 2013-11-18 | Audiovisual associative authentication method and related system |
Country Status (2)
Country | Link |
---|---|
GB (2) | GB2519571A (en) |
WO (1) | WO2015059365A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108650266B (en) * | 2018-05-14 | 2020-02-18 | 平安科技(深圳)有限公司 | Server, voiceprint verification method and storage medium |
US11443030B2 (en) * | 2019-06-10 | 2022-09-13 | Sherman Quackenbush Mohler | Method to encode and decode otherwise unrecorded private credentials, terms, phrases, or sentences |
US11669602B2 (en) * | 2019-07-29 | 2023-06-06 | International Business Machines Corporation | Management of securable computing resources |
US11531787B2 (en) | 2019-07-29 | 2022-12-20 | International Business Machines Corporation | Management of securable computing resources |
CN112346888B (en) * | 2020-11-04 | 2024-06-21 | 网易(杭州)网络有限公司 | Data communication method and device based on software application and server equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1074974A2 (en) * | 1999-06-07 | 2001-02-07 | Nokia Mobile Phones Ltd. | Secure wireless communication user identification by voice recognition |
GB2503292A (en) * | 2012-06-18 | 2013-12-25 | Aplcomp Oy | Voice-based user authentication |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5915001A (en) * | 1996-11-14 | 1999-06-22 | Vois Corporation | System and method for providing and using universally accessible voice and speech data files |
US7636855B2 (en) * | 2004-01-30 | 2009-12-22 | Panasonic Corporation | Multiple choice challenge-response user authorization system and method |
US8255223B2 (en) * | 2004-12-03 | 2012-08-28 | Microsoft Corporation | User authentication by combining speaker verification and reverse turing test |
US7558964B2 (en) * | 2005-09-13 | 2009-07-07 | International Business Machines Corporation | Cued one-time passwords |
US8189878B2 (en) * | 2007-11-07 | 2012-05-29 | Verizon Patent And Licensing Inc. | Multifactor multimedia biometric authentication |
US8769669B2 (en) * | 2012-02-03 | 2014-07-01 | Futurewei Technologies, Inc. | Method and apparatus to authenticate a user to a mobile device using mnemonic based digital signatures |
-
2013
- 2013-10-25 GB GB1318876.8A patent/GB2519571A/en not_active Withdrawn
- 2013-11-18 GB GB1320287.4A patent/GB2519609B/en not_active Expired - Fee Related
-
2014
- 2014-10-27 WO PCT/FI2014/050807 patent/WO2015059365A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1074974A2 (en) * | 1999-06-07 | 2001-02-07 | Nokia Mobile Phones Ltd. | Secure wireless communication user identification by voice recognition |
GB2503292A (en) * | 2012-06-18 | 2013-12-25 | Aplcomp Oy | Voice-based user authentication |
Also Published As
Publication number | Publication date |
---|---|
GB201318876D0 (en) | 2013-12-11 |
GB201320287D0 (en) | 2014-01-01 |
GB2519609A (en) | 2015-04-29 |
WO2015059365A9 (en) | 2015-08-20 |
GB2519609B (en) | 2017-02-15 |
WO2015059365A1 (en) | 2015-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3272101B1 (en) | Audiovisual associative authentication method, related system and device | |
CN109120597B (en) | Identity verification and login method and device and computer equipment | |
EP3256976B1 (en) | Toggling biometric authentication | |
US10200377B1 (en) | Associating a device with a user account | |
CN107800672B (en) | Information verification method, electronic equipment, server and information verification system | |
US7360248B1 (en) | Methods and apparatus for verifying the identity of a user requesting access using location information | |
US9640001B1 (en) | Time-varying representations of user credentials | |
US11128634B1 (en) | System and method for providing a web service using a mobile device capturing dual images | |
US11140171B1 (en) | Establishing and verifying identity using action sequences while protecting user privacy | |
JP6514721B2 (en) | Dual channel identification and authentication | |
US11057372B1 (en) | System and method for authenticating a user to provide a web service | |
CN104540129B (en) | The registering and logging method and system of third-party application | |
US20160173501A1 (en) | Managing electronic account access control | |
US20130254858A1 (en) | Encoding an Authentication Session in a QR Code | |
CN104303483A (en) | User-based identification system for social networks | |
US11757870B1 (en) | Bi-directional voice authentication | |
CN104769914A (en) | Method of processing requests for digital services | |
JP2020038659A (en) | Electronic ticket admission verification anti-counterfeiting system and method thereof | |
CN107018138B (en) | Method and device for determining rights | |
GB2519571A (en) | Audiovisual associative authentication method and related system | |
TW201828162A (en) | Device configuration method, apparatus and system | |
JP2022087815A (en) | System to achieve interoperability through use of interconnected voice verification systems and method and program | |
CN105898002A (en) | Application unlocking method and apparatus for mobile terminal and mobile terminal | |
Zhu et al. | QuickAuth: Two-factor quick authentication based on ambient sound | |
US20190364030A1 (en) | Two-step authentication method, device and corresponding computer program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |