GB2604102A - Processing input data - Google Patents

Processing input data

Info

Publication number
GB2604102A
GB2604102A GB2102251.2A GB202102251A GB2604102A GB 2604102 A GB2604102 A GB 2604102A GB 202102251 A GB202102251 A GB 202102251A GB 2604102 A GB2604102 A GB 2604102A
Authority
GB
United Kingdom
Prior art keywords
data
server
speech
speech data
authorised user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2102251.2A
Other versions
GB202102251D0 (en)
Inventor
Matthew Carroll Patrick
Petersen John
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Validsoft Ltd
Original Assignee
Validsoft Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Validsoft Ltd filed Critical Validsoft Ltd
Priority to GB2102251.2A priority Critical patent/GB2604102A/en
Publication of GB202102251D0 publication Critical patent/GB202102251D0/en
Priority to EP22157148.2A priority patent/EP4047496A1/en
Priority to US17/672,986 priority patent/US20220262370A1/en
Publication of GB2604102A publication Critical patent/GB2604102A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441Countermeasures against malicious traffic
    • H04L63/1466Active attacks involving interception, injection, modification, spoofing of data unit addresses, e.g. hijacking, packet injection or TCP sequence number attacks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3226Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using a predetermined code, e.g. password, passphrase or PIN
    • H04L9/3231Biological data, e.g. fingerprint, voice or retina
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A server receives speech data from a client, extracts structured input data from the speech, determines whether the speech corresponds to an authorised user and executes a process based on the extracted structured input data. If the speech does not correspond to an authorised user, this may indicate that the speech has been altered, e.g. intercepted by a man-in-the-middle (MitM) or man-in-the-browser (MitB) attack. Extracting structured input from speech may comprise converting the speech into text and extracting the input from the text. Determining that speech comes from an authorised user may be done by comparing the speech to a stored sample of the user. The determination may be performed by a biometric authentication server, which may use artificial intelligence (AI). Extraction of structured input data may be done by a speech recognition server or by speech recognition software. Speech data may be received through a web browser or mobile application request. If the speech data does not correspond to an authorised user, a notification may be sent to the client that the input data may have been modified.

Description

PROCESSING INPUT DATA
[0001] This invention relates to a method of processing input data by a server, a server for processing input data, and a system for verifying input data. For example, this invention may relate to the detection of corrupted/altered online transactions caused by Man-in-the-Middle (MitM) or Man-in-the-Browser (MitB) attacks.
BACKGROUND
[0002] Man-in-the-Middle (MitM), and Man-in-the-Browser (MitB), are forms of internet threat, or malware, which target online interactions between a user and a service provider. The attacks intercept transmissions of data sent from a user device to a device of a service provider and alter the data, for example by replacing information with new information. The new information is then transmitted to the service provider device, without either the user device or the service provider device being aware. The data most vulnerable to this sort of attack is structured data.
[0003] This form of attack also bypasses any two-factor authentication provided by the service provider by presenting unaltered information to the user while presenting altered information to the service provider; the user therefore authenticates the requested action, unaware that the details of the action have been changed.
[0004] A MitM attack operates through the use of a proxy server between the user and the service provider; the proxy server is arranged to intercept data exchange between the user and the service provider, controlling encryption and decryption of the messages and therefore having the access required to alter the data of the messages. The user is therefore unaware that their data has been changed.
[0005] A MitB attack comprises malware which resides in the browser itself. The malware makes changes to the data before encryption takes place, ensuring the service provider sees only the altered data, and makes changes again after decryption takes place, ensuring the user sees only the original, unaltered data.
[0006] Both methods of attack, MitM and MitB, rely on access to structured data formats of the messages, in order to alter appropriate data elements without alerting either the user or the service provider.
[0007] Detecting whether an attack has occurred typically requires the service provider to use a secondary, independent channel, separate from the primary internet channel of the communications (such as SMS or a phone call), to relay the communication as received back to the user for confirmation. A further method is to provide the user with a hardware device, known as a signing token, to re-key certain communication data in order to generate a value to be sent back to the service provider via the browser, thereby authenticating and protecting the communication.
[0008] The former technique uses Out-of-Band (OOB) communication whilst the latter uses encryption/signing. There are other less popular methods, such as hardened browsers, however these require special software and affect normal browser functionality. Until the communication details are confirmed to be correct, the user is unable to authorise the communication; the code required to do so is contained in the SMS or call, or is generated by the signing token.
[0009] The drawbacks of OOB are that SMS messages and phone calls can both be intercepted by SIM Swap fraud, or by directly attacking the global telecommunications network protocol, SS7, itself; successful attacks of this nature have been verified. Authorisation codes can be stolen and used to authorise communications created by untrustworthy third parties rather than by genuine users. The calls and messages also cost service providers money and resources.
[0010] The drawbacks of signing tokens are that they are expensive, the rekeying is error-prone, they are inconvenient, and they are limited to a single service provider (they cannot be shared).
[0011] The invention addresses how to securely authenticate a user, and thereby detect any alteration of transaction details or injection of transactions, without having to use secondary OOB channels or any form of signing device.
BRIEF SUMMARY OF THE DISCLOSURE
[0012] It is an aim of certain embodiments of the invention to solve, mitigate or obviate, at least partly, at least one of the problems and/or disadvantages associated with the prior art.
Certain embodiments aim to provide at least one of the advantages described below.
[0013] According to a first aspect, there is provided a method of processing input data by a server, the method comprising: receiving speech data from a client device; extracting structured input data from the speech data; determining if the speech data corresponds to an authorised user; and if the speech data corresponds to an authorised user, executing a process based on the extracted structured input data.
[0014] In an embodiment, determining that the speech data does not correspond to an authorised user indicates that the speech data may have been altered.
[0015] In an embodiment, extracting structured input data from the speech data comprises: converting the speech data into text data; and extracting the structured input data from the text data.
[0016] In an embodiment, determining if the speech data corresponds to an authorised user comprises: comparing the speech data to a prestored sample of the authorised user; confirming whether the speech data matches the prestored sample of the authorised user; and if the speech data matches the prestored sample of the authorised user, determining that the speech data corresponds to the authorised user.
[0017] In an embodiment, the determining if the speech data corresponds to the authorised user is performed by a biometric authentication server.
[0018] In an embodiment, the biometric authentication server performs the determining if the speech data corresponds to the authorised user using Artificial Intelligence, AI, software.
[0019] In an embodiment, extracting the structured input data from the speech data is performed by a speech recognition server.
[0020] In an embodiment, the extracting the structured input data from the speech data is performed using speech recognition software.
[0021] In an embodiment, the speech data is received in a web browser request. A web browser request is a request originating from a web browser. The request may be a HTTP request, a HTTPS request, a WebSocket request, a SIP request, a WebRTC request or a request according to some other protocol used to send data from a web browser to a web server.
[0022] In an embodiment, the speech data is received in a mobile application request. A mobile application request is a request originating from a mobile application. The request may be a HTTP request, a HTTPS request, a FTP request, a WebSocket request, a SIP request, a WebRTC request or a request according to some other protocol used to send data from a mobile application to a server.
[0023] In an embodiment, the speech data is received in a request using an Internet protocol for transmitting data between an application running on a client device and a server, over TCP/IP connections.
[0024] In an embodiment, the method further comprises, if the speech data does not correspond to an authorised user, sending, to the client device, a notification that the input data may have been altered and that the process has not been executed.
[0025] According to a second aspect, there is provided a server for processing input data, the server configured to: receive speech data from a client device; extract structured input data from the speech data; determine if the speech data corresponds to an authorised user; and if the speech data corresponds to an authorised user, execute a process based on the extracted structured input data.
[0026] In an embodiment, determining that the speech data does not correspond to an authorised user indicates that the speech data may have been altered.
[0027] In an embodiment, extracting structured input data from the speech data comprises: converting the speech data into text data; and extracting the structured input data from the text data.
[0028] In an embodiment, determining if the speech data corresponds to an authorised user comprises: comparing the speech data to a prestored sample of the authorised user; confirming whether the speech data matches the prestored sample of the authorised user; and if the speech data matches the prestored sample of the authorised user, determining that the speech data corresponds to the authorised user.
[0029] In an embodiment, determining if the speech data corresponds to the authorised user further comprises: transmitting the speech data to a biometric authentication server which performs the determining; and receiving, from the biometric authentication server, an indication of whether the speech data matches the prestored sample of the authorised user.
[0030] In an embodiment, the determining is performed using Artificial Intelligence, AI, software.
[0031] In an embodiment, extracting the structured input data from the speech data further comprises: transmitting the speech data to a speech recognition server which performs the extracting; and receiving, from the speech recognition server, the text data.
[0032] In an embodiment, extracting the structured input data from the speech data is performed using speech recognition software.
[0033] In an embodiment, the speech data is received in a web browser request or a mobile application request.
[0034] In an embodiment, the server is further configured to, if the speech data does not correspond to an authorised user, send, to the client device, a notification that the input data may have been altered and that the process has not been executed.
[0035] According to a third aspect, there is provided a system for verifying input data, the system comprising the server and a client device configured to: receive an input of speech data; and transmit the speech data to the server.
[0036] Another aspect of the invention provides a computer program comprising instructions arranged, when executed, to implement a method in accordance with any one of the above-described aspects. A further aspect provides machine-readable storage storing such a program.
[0037] Another aspect comprises a carrier medium comprising computer readable code configured to cause a computer to perform the above methods. Since some methods in accordance with embodiments can be implemented by software, some embodiments encompass computer code provided on any suitable carrier medium.
The carrier medium can comprise any storage medium such as a CD ROM, a magnetic device or a programmable memory device, or any transient medium such as any signal e.g. an electrical, optical or microwave signal. The carrier medium may comprise a non-transitory computer readable storage medium.
BRIEF DESCRIPTION OF THE DRAWINGS
[0038] Embodiments of the invention are further described hereinafter with reference to the accompanying drawings, in which:
Figure 1 is a diagram showing an example of a Man in the Browser (MitB) plugin altering structured data transmitted by the client device;
Figure 2 is a diagram showing an example of a Man in the Middle (MitM) proxy altering structured data transmitted by a client device;
Figure 3 is a diagram showing an example of a client device sending unstructured data to a server;
Figure 4 is a diagram showing an example of a client device sending unstructured data to a server, which is then transmitted to and processed by a speech recognition server;
Figure 5 is a diagram showing an example of a client device sending unstructured data to a server, which is then transmitted to and processed by a speech recognition server and a biometric authentication server;
Figure 6 is a flowchart showing an example of a method carried out by a server upon receiving unstructured data from a client device; and
Figure 7 is a diagram showing an example of a method carried out by a server, a biometric authentication server, and a speech recognition server upon the server receiving unstructured data from a client device.
DETAILED DESCRIPTION
[0039] A primary weakness of browser technology is the ability for malware to be introduced by browser plugins and extensions, which are also an essential, required component of how browsers work. Due to the underlying data being structured, the data can be detected and altered by malware because the malware only requires understanding of the structure of the data, which can be obtained easily.
[0040] Modern web browsers also support multi-media, including support for speech; a user is able to transmit voice commands through their browser. Various websites are configured to accept audio input. An example is described here based on an Internet banking website, which may operate in the following manner to process a payment. The user speaks the destination account number and amount of the payment when requested by the website.
The web browser running on the client device includes this audio signal in a web browser request, for example a HTTPS request, and transmits the request to a server device hosting the website. The server processes the audio signal using, for example, automatic speech recognition (ASR), and converts the audio signal into text. The server then processes the payment transaction according to the information in the text (i.e. the account number and amount). In this manner, the user is able to speak information directly to the web browser instead of using keyboard inputs or the like.
[0041] Various mobile applications also support multi-media, including support for speech; a user is able to transmit voice commands through the mobile application. An example may be a mobile banking app, which may operate in the following manner to process a payment. The user speaks the destination account number and amount of the payment when requested. The banking app running on the client device includes this audio signal in a request, for example a HTTPS request, and transmits the request to a server device hosting the mobile banking application. The server processes the audio signal using, for example, automatic speech recognition (ASR), and converts the audio signal into text. The server then processes the payment transaction according to the information in the text (i.e. the account number and amount).
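By way of illustration only, the following is a minimal sketch of the server-side flow just described, assuming a Python Flask endpoint; the route, the form-field name and the run_asr helper are hypothetical and not part of the disclosure.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_asr(audio_bytes: bytes) -> str:
    """Hypothetical automatic speech recognition (ASR) step."""
    raise NotImplementedError("plug an ASR engine in here")

@app.route("/payments/speech", methods=["POST"])  # hypothetical route
def speech_payment():
    # The browser or mobile app includes the audio signal in the request body.
    audio = request.files["speech"].read()
    # Convert the audio signal into text, e.g. "pay 250 pounds to account 12345678".
    transcript = run_asr(audio)
    # Biometric verification and parsing of the transcript are sketched later.
    return jsonify({"transcript": transcript})
```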
[0042] The invention enables users to speak the required information directly to the browser, for example, instead of using keyboard inputs or the like. The resultant data of speech input from the user is unstructured, in contrast to the structured alphanumeric data which results from a keyboard input. Unstructured audio is much harder to alter in general and cannot be altered in as convincing a way as a structured data element can be.
[0043] It is theoretically possible for audio containing instructions contrary to those intended by the user to be injected into the transmission by a MitM or MitB. However, by using continuous voice biometrics, the invention may authenticate the audio instructions as having been spoken by the genuine user. The MitM or MitB has no way of overcoming this; any recording other than that made by the user could not be used without it being detected and failing the authentication.
[0044] The unstructured audio, once received by the service provider, may be deciphered by speech recognition software, or Artificial Intelligence (AI) software, to extract the required information and put it into a structured format.
[0045] Figures 1 and 2 show the process of malware intercepting and altering structured data of a user in more detail.
[0046] Figure 1 is a diagram showing an example of a user 120 of a client device 110 requesting a server 140 to perform a process by inputting structured data 150 to be transmitted to the server 140 while unaware that their browser is host to a Man in the Browser (MitB) plugin 100. The MitB plugin 100 intercepts the structured data 150 and alters the data.
[0047] This altered structured data 130 is then transmitted from the client device 110 to the server 140 without the user 120 becoming aware that their data has been altered. The structured data 150 may be numerical data, text data, or the like.
[0048] The client device 110 may be a mobile phone or personal computer or the like. The MitB plugin 100 may not be a plugin but may instead be an unauthorised browser extension, patch, update, DOM object or the like.
[0049] The server 140 may then receive the altered structured data 130, accept the request, and perform a process based on the altered structured data 130.
[0050] The server 140 may transmit a message to the client device 110 in return, which may in turn be intercepted by the MitB plugin 100. The MitB plugin 100 may then alter the data received from the server 140 such that the client device 110 receives data that corresponds to the unaltered structured data that was previously intercepted by the MitB plugin 100.
[0051] The message transmitted from the server 140 to the client device 110 may be structured data 150. The structured data 150 may contain details of the process that was carried out, may be a confirmation of receipt of the structured data 150 of the client device 110, or the like.
[0052] The MitB plugin 100 may intercept the structured data 150 before encryption of the data may take place at the client device 110. The altered structured data 130 may then be encrypted by the client device 110, which remains unaware that the data has been altered, and may then be transmitted to the server 140. The server 140 may then transmit encrypted data back to the client device 110 which may decrypt the encrypted data. This decrypted data may be intercepted and altered by the MitB plugin 100 before the user 120 of the client device 110 gains access to the data such that the data corresponds to the original structured data 150 input by the user 120 of the client device 110.
[0053] Figure 2 is a diagram showing an example of a user 210 of a client device 200 requesting a server 250 to perform a process by inputting structured data 220 to be transmitted to a server 250 which is intercepted by a Man in the Middle (MitM) proxy server 230 before it reaches the server 250. The MitM proxy server 230 intercepts and alters the structured data 220 before transmitting the altered structured data 240 to the server 250. The alteration takes place without the user becoming aware that their data has been intercepted.
[0054] The server 250 may then receive the altered structured data 240 and perform the requested process based on the altered structured data 240.
[0055] The server 250 may transmit a message to the client device 200 in return, which may in turn be intercepted by the MitM proxy server 230. The MitM proxy server 230 may then alter the data received from the server 250 such that the client device 200 receives data that corresponds to the unaltered structured data 220 that was previously intercepted by the MitM proxy server 230.
[0056] The message transmitted from the server 250 to the client device 200 may be structured data 220. The structured data 220 may contain details of the process that was carried out, or may be a confirmation of receipt of the structured data 220 of the client device 200, or the like.
[0057] Embodiments of the present invention will now be described in the context of a server, or group of servers, configured to receive an unstructured data input from a client device and process it in order to determine whether the user who input the unstructured data is an authorised user, and to extract structured data from the unstructured data so that, if the user is the authorised user, a process requested by the user can be executed.
[0058] Figure 3 is a diagram showing an example of a user 310 of a client device 300 performing an input 320 of unstructured data 330 in order to request a server 340 to perform a process. The client device 300 may be a mobile phone or personal computer or the like. The client device 300 comprises a microphone (not shown). The microphone may be built into the client device 300 or coupled to the client device 300 by a wired or non-wired connection.
[0059] In contrast to the examples described with regard to Figures 1 and 2, the data which is input by the user 310 in Figure 3 is unstructured data rather than structured data. As such, although a MitB plugin or MitM proxy server may attempt to alter the unstructured data, it cannot readily do so in a way that is not obviously an alteration. The altered unstructured data may be transmitted to the server 340, which may analyse the altered unstructured data, determine that the data has been altered, and reject the request.
[0060] The unstructured data may be speech data of the voice of the user 310, an image of a handwriting input of the user 310, multimedia data of video of the user 310, or the like.
[0061] For example, if the unstructured data were audio data of a speech input, a MitM (such as the proxy server 230 of Figure 2) may insert computer-generated speech to replace the part of the audio data which contains the information that the MitM wishes to alter. Alternatively, the speech input may be altered via impersonation or the like.
[0062] A web browser runs on the client device 300. The web browser is a software application installed on the client device 300. The user 310 initiates the web browser application. The web browser application includes support for speech input for web pages programmed to receive speech input. Various websites are configured to accept audio input. An example is described here relating to an Internet banking website. The user accesses the Internet banking website through the web browser. In this example, the user speaks the destination account number and amount of the payment when requested by the Internet banking website. In other words, the user 310 speaks, and an audio signal is detected by the microphone. The audio signal detected by the microphone is then processed by the web browser application and sent to the server 340. For example, the web browser application includes the audio signal in a HTTP request, and sends the HTTP request to the server 340.
Although a HTTP request is described here as an example, the audio signal may be transmitted over HTTPS, WebSocket, Session Initiation Protocol (SIP), WebRTC or some other web channel protocol.
[0063] The user 310 speaks the user input 330. The user input 330 may include an account number or payment amount relating to an online transaction, for example. In this step, the user 310 inputs the unstructured data 330, which is received by the client device 300 and transmitted, by the client device 300, to the server 340. In this example, the unstructured data 330 is speech data, in other words an audio signal. Alternatively, however, the unstructured data 330 may be a different sort of unstructured data, such as a handwriting input, a video input, or the like. The client device 300 may transmit a file comprising the speech data. In this example, the client device 300 transmits a web browser request comprising the speech data. The client device 300 may transmit a transaction comprising the speech data. The speech data may relate to details of an online transaction, such as an online payment for example. In more detail, the web browser application running on the client device 300 includes the speech data, or audio signal, in a HTTP request and transmits the HTTP request including the speech data to the server 340. As explained above, the speech data may be transmitted in a HTTP request, HTTPS request, WebSocket request, SIP request, WebRTC request or a request according to some other protocol used to send data from a web browser to a server. The speech data is transmitted from the client device 300 and received at the server 340 through a web channel.
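As a purely illustrative sketch of the client side of this exchange, the following posts captured speech data to the server over HTTPS; the URL, file name and form field are assumptions made for the example.

```python
import requests

# Hypothetical endpoint and form field, matching the server sketch above.
with open("spoken_payment.wav", "rb") as f:
    response = requests.post(
        "https://bank.example.com/payments/speech",
        files={"speech": ("spoken_payment.wav", f, "audio/wav")},
        timeout=30,
    )
response.raise_for_status()
print(response.json())  # e.g. {"transcript": "pay 250 pounds to account 12345678"}
```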
[0064] The unstructured data 330 is received at the server 340 in the request originating from the web browser. The server 340 determines whether the unstructured data 330 corresponds to an authorised user. To perform the determination, the server 340 may compare the unstructured data 330 to a prestored sample of unstructured data corresponding to the authorised user. Alternatively, any other method of authenticating the user may be used. The sample may be input by the authorised user upon registration with a service provided by the server 340. The sample may be stored in a database of the server 340 or may be stored in an external server. The server 340 may use AI software to confirm whether the unstructured data matches the sample of the authorised user. Alternatively, for example, if the unstructured data 330 is speech data, the server 340 may use continuous voice biometrics to confirm whether the speech data matches the sample of the authorised user.
[0065] The server 340 determines whether the unstructured data 330 corresponds to an authorised user. To perform the determination, the server 340 may compare the unstructured data 330 to a prestored sample of unstructured data corresponding to the authorised user, to determine whether the unstructured data matches the sample of the authorised user. In particular, it is determined whether the unstructured data corresponds to the same voice as the prestored sample. This may comprise using continuous voice biometrics to confirm whether the speech data corresponds to the same voice as the prestored sample or using AI software to confirm whether the speech data corresponds to the same voice as the prestored sample. For example, the unstructured data 330 is converted into a biometric model or template and compared with a pre-stored biometric model or template.
[0066] The server 340 may use continuous voice biometrics to confirm whether the entire speech data matches the sample of the authorised user. If the entire speech data matches the sample, then the server 340 determines that the user 310 corresponds to the authorised user. Determining that any part of the speech data does not match the sample is an indication that the unstructured data has been altered. For example, a portion of the unstructured data may have been injected, added, modified, or replaced. For example, an alteration of a transaction detail or an injection of transactions may have been made.
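A minimal sketch, under stated assumptions, of how such a continuous check might be scored: the audio is compared window by window against an enrolled voiceprint, so that an injected or replaced segment anywhere in the signal fails the match. The embed function stands in for a trained speaker-embedding model; the window sizes and threshold are illustrative.

```python
import numpy as np

def embed(audio_window: np.ndarray) -> np.ndarray:
    """Placeholder speaker-embedding extractor (a trained voice model in practice)."""
    raise NotImplementedError

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def continuous_match(audio: np.ndarray, enrolled: np.ndarray, sr: int = 16000,
                     win_s: float = 2.0, hop_s: float = 1.0,
                     threshold: float = 0.7) -> bool:
    """Return True only if every window of the audio matches the enrolled voiceprint."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    for start in range(0, max(len(audio) - win, 0) + 1, hop):
        if cosine(embed(audio[start:start + win]), enrolled) < threshold:
            return False  # any non-matching segment indicates possible alteration
    return True
```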
[0067] Alternatively, the server 340 may use continuous voice biometrics to confirm whether the entirety of a designated portion of the speech data matches the sample for example. If the entirety of a designated portion of the speech data matches the sample, then the server 340 determines that the user 310 corresponds to the authorised user. The server 340 determining that any part of the designated portion of the speech data does not match the sample is an indication that the unstructured data has been altered.
[0068] The designated portion of the speech data may be predefined such that the designated portion corresponds to speech data between predetermined time instants.
Alternatively, the designated portion may correspond to speech data corresponding to predetermined structured data. For example, the designated portion may correspond to text data that relates to sensitive aspects of a transaction (for example an account number or payment amount).
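A trivial sketch of the first alternative, selecting a designated portion between predetermined time instants, assuming 16 kHz mono samples held in a NumPy array; the instants are illustrative.

```python
import numpy as np

def designated_portion(audio: np.ndarray, sr: int = 16000,
                       start_s: float = 1.5, end_s: float = 4.0) -> np.ndarray:
    """Slice out the speech data between two predetermined time instants."""
    return audio[int(start_s * sr):int(end_s * sr)]
```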
[0069] The server 340 also extracts structured input data from the unstructured data. For example, if the unstructured data is speech data, the server 340 may convert the speech data into text data, from which the structured input data can be extracted; it may be that the server 340 extracts information from the text data and then converts it into the structured input data. The structured input data may include instructions to be carried out by the server 340, numerical data, or the like. The server 340 processes the audio signal using, for example, automatic speech recognition (ASR), and converts the audio signal into text.
[0070] If it is determined by the server 340 that the user 310 corresponds to the authorised user, the server 340 executes the requested process based on the content of the structured input data. In other words, the server 340 processes the payment transaction according to the information in the text (i.e. the account number and amount). As explained above, unstructured audio is much harder to alter in general and cannot be altered in as convincing a way as a structured data element, such as text, can be. In this method, the transaction requested and the identity of the speaker are encoded in the same speech data. Structured data corresponding to the transaction request is extracted from the speech data. It is also determined whether the speech data corresponds to the authorised user. If the speech data does not correspond to the authorised user, this is detected. In this example, the critical information of the account number and payment amount was received as audio from the user, rather than being keyed in. This information is therefore transmitted as unstructured data. An ASR process performed at the server 340 extracts the critical information and puts it into a structured format for processing. By inputting and transmitting the transaction data as audio, voice biometrics can be used to make the transaction tamper-evident.
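A short sketch of the extraction step of paragraph [0069], assuming the transcript phrasing used in the running example; the regular expressions and field names are illustrative only, not a prescribed format.

```python
import re

def extract_payment(transcript: str) -> dict:
    """Pull structured input data (account number, amount) from ASR text."""
    account = re.search(r"account(?: number)?\s+(\d{6,10})", transcript, re.I)
    amount = re.search(r"(\d+(?:\.\d{2})?)\s*(?:pounds|gbp)", transcript, re.I)
    if not (account and amount):
        raise ValueError("transcript does not contain the expected fields")
    return {"account": account.group(1), "amount": float(amount.group(1))}

# extract_payment("pay 250 pounds to account 12345678")
# -> {"account": "12345678", "amount": 250.0}
```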
[0071] Additionally, if it has been determined that the user 310 is the authorised user and the process based on the content of the structured input data has been executed, the server 340 may transmit a message to the client device 300. The message may contain a confirmation of the identity of the user 310 and may also contain details of the process which was executed.
[0072] Additionally, if it has been determined that the user 310 is not the authorised user, the process based on the content of the structured input data is not executed. The server 340 may also transmit a message to the client device 300. The message may notify that the process, for example the transaction, has not been executed. The message may further notify that a potentially fraudulent transaction has been attempted. The message may notify that the input data may have been altered and that the process has not been executed.
[0073] Although in the above described example an Internet banking website is described, it is understood that various websites and web pages across various sectors are configured to accept audio input. Furthermore, although in the above described example a web browser runs on the client device 300 and transmits the speech data in a web browser request to the server 340, alternatively a mobile application may be running on the client device 300 and transmit the speech data in a mobile application request to the server 340, for example. The mobile application is a software application designed to run on a mobile device, which is installed on the client device 300. The mobile application has Internet connectivity. The user 310 initiates the mobile application. The mobile application includes support for speech input. An audio signal detected by the microphone is processed by the mobile application and sent to the server 340. For example, the mobile application includes the audio signal in a HTTP request, and sends the HTTP request to the server 340. Although a HTTP request is described here as an example, the audio signal may be transmitted over HTTPS, FTP, WebSocket, Session Initiation Protocol (SIP), WebRTC or some other web channel protocol. The unstructured data 330 is received at the server 340 in the request originating from the mobile application.
[0074] The speech data may be received in a request using an Internet protocol for transmitting data between an application running on the client device and a server over TCP/IP connections. The application may be a web browser, mobile application or any other application with Internet connectivity for example.
[0075] Figure 4 shows a diagram of an example of a user 410 of a client device 400 performing an input of unstructured data 430 in order to request a server 440 to perform a process.
[0076] The user 410 inputs the unstructured data 430 which is received by the client device 400 and is transmitted, by the client device 400, to the server 440. The unstructured data 430 may, for example, be speech data which is generated from a user's voice input.
Alternatively, the unstructured data 430 may be a handwritten input, a video input, or any other kind of unstructured data input. The unstructured data 430 may be contained in a file. The unstructured data 430 may be contained in a web browser request or a mobile application request, as described above for example. The unstructured data 430 may be contained in a transaction. The unstructured data 430 may relate to details of an online transaction, for example.
[0077] The unstructured data 430 is transmitted by the server 440 to a speech recognition server 470. The speech recognition server may be a server or any other electronic device with the capability to process the unstructured data. Alternatively, the server 440 may transmit the unstructured data 430 to a biometric authentication server. The biometric authentication server may be a server or any other electronic device with the capability to process the unstructured data. Alternatively, the server 440 may not transmit the unstructured data 430 to any external device, as described with reference to Figure 3.
[0078] The speech recognition server 470 extracts structured input data from the unstructured data. To perform the extraction, for example, the speech recognition server 470 may convert speech data into text data, from which the structured input data can be extracted; it may be that the speech recognition server 470 extracts information from the text data and then converts it into the structured input data. The speech recognition server 470 may convert the speech data into text data using Artificial Intelligence (AI) software or may use speech recognition software. The structured input data may, for example, include instructions to be carried out by the server 440, numerical data, or the like.
[0079] The structured input data is transmitted by the speech recognition server 470 to the server 440.
[0080] The unstructured data 430 is also analysed by the server 440 in order to determine whether the user 410 who input the unstructured data 430 corresponds to an authorised user. For example, the unstructured data 430 may be speech data, in which case the speech data may be compared with a prestored sample of speech data of the authorised user. The sample may be recorded by the user upon registration with a service provided by the server 440. The sample may be stored in a database of the server 440 or may be stored in an external server. The server 440 may use continuous voice biometrics to confirm whether the speech data matches the sample of the authorised user.
[0081] The server 440 determines whether the unstructured data 430 corresponds to the authorised user. If the unstructured data 430 does not correspond to any authorised user, this may indicate that the unstructured data 430 has been tampered with by an unauthorised third party, and the server 440 may refuse the request of the user 410 to perform the process. The server 440 may also transmit a message to the client device 400 of the user 410 indicating that the request has been declined. The message may also indicate to the user 410 of the client device 400 that the unstructured data 430 has been tampered with.
[0082] If it is determined by the server 440 that the user 410 corresponds to the authorised user, the server 440 executes the requested process based on the extracted content of the structured input data.
[0083] Additionally, if it has been determined that the user 410 is the authorised user and the process based on the content of the structured input data has been executed, the server 440 may transmit a message to the client device 400. The message may contain a confirmation of the identity of the user 410 and may also contain details of the process which was executed.
[0084] Figure 5 shows a diagram of an example of a user 510 of a client device 500 performing an input of speech data 530 in order to request a server 540 to perform a process.
[0085] The user 510 inputs the speech data 530, which is received by the client device 500 and transmitted, by the client device 500, to the server 540. Alternatively, the input may be a handwritten input, a video input or any other input of unstructured data by a user. The unstructured data 530 may be contained in a file. The unstructured data 530 may be contained in a web browser request or a mobile application request, as described above for example. The unstructured data 530 may be contained in a transaction. The unstructured data 530 may relate to details of an online transaction, for example.
[0086] The speech data 530 is transmitted by the server 540 to a speech recognition server 580 and a biometric authentication server 590. Alternatively, the server 540 may perform all of the processing without designating any tasks to external devices, as described with reference to Figure 3.
[0087] The speech recognition server 580 extracts structured input data from the speech data 530. To perform the extraction, the speech recognition server 580 may convert the speech data 530 into text data, from which the structured input data 550 can be extracted; it may be, for example, that the speech recognition server 580 extracts information from the text data and then converts the extracted information into the structured input data 550. The speech recognition server 580 may convert the speech data 530 into text data using Artificial Intelligence (AI) software or may use speech recognition software. The speech recognition server 580 then transmits the structured input data 550 to the server 540.
[0088] Upon receiving the speech data 530, the biometric authentication server 590 analyses the speech data 530 to determine whether the user 510 who input the speech data 530 corresponds to an authorised user. The biometric authentication server 590 may compare the speech data to a prestored sample of the authorised user. The sample may be recorded by the user upon registration with a service provided by the server 540. The sample may be stored in a database of the server 540 or may be stored in an external server.
[0089] The biometric authentication server 590 determines whether the speech data matches the prestored sample of the authorised user. The biometric authentication server 590 may use continuous voice biometrics to confirm whether the speech data matches the sample of the authorised user. The server 590 may use continuous voice biometrics to confirm whether the entire speech data or, alternatively, the entirety of a designated portion of the speech data, matches the sample of the authorised user, as described with reference to Figure 3.
[0090] The biometric authentication server 590 transmits, to the server 540, an indication 570 of whether the user 510 has been determined to be the authorised user. The indication 570 may confirm that the user 510 is the authorised user or may indicate that the speech sample does not match any samples of authorised users, indicating that the unstructured speech data has been altered by an unauthorised third party. If the indication 570 indicates that any part of the speech sample, or of a designated portion of the speech sample, does not match any samples of authorised users, the server 540 may refuse the request of the user 510 to perform the process and may transmit a message to the client device 500 of the user indicating that the request has been declined. The message may also indicate to the user of the client device that it is likely that the speech data has been tampered with.
[0091] If it is determined by the biometric authentication server 590 that the user 510 corresponds to the authorised user, the server 540 executes the requested process based on the extracted content of the structured input data.
[0092] Additionally, if it has been determined that the user 510 is the authorised user and the process based on the content of the structured input data has been executed, the server 540 may also transmit a message to the client device 500. The message may contain a confirmation of the identity of the user 510 and may also contain details of the process which was executed.
[0093] Figure 6 is a flowchart showing a method for verifying input data performed by a server.
[0094] In step S600, the server receives speech data from a client device. The speech data is received by the client device; the user provides a voice input which is transmitted as speech data by the client device to the server. Alternatively, the speech data may be a different sort of unstructured data, such as a handwritten input, a video input, or the like. The unstructured data may be contained in a file. The speech data is received in a request originating from a web browser or a mobile application, for example. For example, the speech data may be contained in a HTTP request, a HTTPS request, an FTP request, a WebSocket request, a SIP request, a WebRTC request or some other message format used to transmit data between a web browser application and a server or between a mobile application and a server. The unstructured data may be contained in a transaction. The unstructured data may relate to details of an online transaction, for example.
[0095] In step S620, the server extracts structured input data from the speech data. This may be performed by the server through the use of speech recognition software, which may convert the speech data into text data. The server may then extract the structured input data from the text data by extracting information from the text data and converting it into the structured input data. Alternatively, the server may extract the structured input data through the use of Artificial Intelligence (AI) software.
[0096] Alternatively, the server may designate this step of the method to a separate server, such as a speech recognition server. The server may then transmit the speech data to the speech recognition server, which may perform step S620 and may subsequently transmit the structured input data to the server.
[0097] In step S640, the server determines, based on the speech data, whether the user who input the speech data corresponds to an authorised user. The authorised user may be a user that has previously used the service or may be a user that is registered in a database.
The determination may comprise a comparison of the speech data with prestored speech data attributed to the authorised user; the input speech data may be compared with the prestored speech data of the authorised user that the user is claiming to be.
[0098] In step S660, if it is determined that the user is the authorised user, the server executes a process based on the structured input data extracted from the text data.
[0099] Alternatively, the server may transmit the speech data to a biometric authentication server, which may determine whether the user is the authorised user, as described with reference to Figure 4. Alternatively, the server may transmit the speech data to the biometric authentication server as well as a speech recognition server, wherein the biometric authentication server may determine whether the user is the authorised user and the speech recognition server may extract the structured input data from the speech data, as described with reference to Figure 5.
[00100] Additionally, if it has been determined that the user is the authorised user and the process based on the content of the extracted structured input data has been executed, the server may transmit a message to the client device. The message may contain a confirmation of the identity of the user and may also contain details of the process which was executed.
[00101] Figure 7 shows a diagram of an example of the interactions between a server 750, a biometric authentication server 760 and a speech recognition server 770 while performing a method of verifying input data.
[00102] In step S700, the server 750 receives speech data from a client device. The speech data is received by the client device; the user provides a voice input which is transmitted as speech data by the client device to the server 750. Alternatively, the speech data may be a different sort of unstructured data, such as a handwritten input, a video input, or the like. The unstructured data may be contained in a file. The speech data is received in a request originating from a web browser or a mobile application, for example. For example, the speech data may be contained in a HTTP request, a HTTPS request, an FTP request, a WebSocket request, a SIP request, a WebRTC request or some other message format used to transmit data between a web browser application and a server or between a mobile application and a server. The unstructured data may be contained in a transaction. The unstructured data may relate to details of an online transaction, for example.
[00103] In step S705, the server 750 transmits the speech data to the biometric authentication server 760 and the speech recognition server 770. Alternatively, the server 750 may transmit the speech data to only one external server, which may carry out the duties of both the biometric authentication server 760 and the speech recognition server 770. Alternatively, the server 750 may transmit the speech data to only one of the biometric authentication server 760 and the speech recognition server 770, and the server 750 may itself carry out the duties of the other.
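As an illustrative sketch only, step S705 might be implemented as a concurrent fan-out, forwarding the same speech data to both servers in parallel and combining the two results; the URLs and response fields are assumptions, and aiohttp is used simply as one plausible asynchronous HTTP client.

```python
import asyncio
import aiohttp

async def fan_out(speech: bytes) -> tuple[bool, str]:
    """Forward the speech data to both back-end servers in parallel (step S705)."""
    async with aiohttp.ClientSession() as session:
        async def post(url: str) -> dict:
            async with session.post(url, data=speech) as resp:
                resp.raise_for_status()
                return await resp.json()
        bio, asr = await asyncio.gather(
            post("https://biometric.example.internal/verify"),  # hypothetical
            post("https://asr.example.internal/transcribe"),    # hypothetical
        )
    # bio["authorised"] stands for the indication of step S730; asr["text"] for
    # the transcript of step S725. Both field names are assumptions.
    return bio["authorised"], asr["text"]

# authorised, transcript = asyncio.run(fan_out(speech_bytes))
```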
[00104] In step S710, the biometric authentication server compares the speech data to a prestored sample of an authorised user. The sample may be recorded by the user upon registration with a service provided by the server 750. The sample may be stored in a database of the server 750 or the biometric authentication server, or may be stored in an external server.
[00105] In step S715, the biometric authentication server 760 determines whether the speech data matches the prestored sample of the authorised user. The biometric authentication server 760 may use continuous voice biometrics to confirm whether the speech data matches the sample of the authorised user. The server 760 may use continuous voice biometrics to confirm whether the entire speech data or, alternatively, the entirety of a designated portion of the speech data, matches the sample of the authorised user, as described with reference to Figure 3.
[00106] In step S720, the speech recognition server 770 extracts structured input data from the speech data. To perform the extraction, the speech recognition server 770 may convert the speech data into text data, from which the structured input data can be extracted; it may be that the speech recognition server 770 extracts information from the text data and then converts it into the structured input data. The speech recognition server 770 may convert the speech data into text data using Artificial Intelligence (AI) software or may use speech recognition software.
[00107] In step S725, the speech recognition server 770 transmits the structured input data to the server 750.
[00108] In step S730, the biometric authentication server 760 transmits an indication of whether the user has been determined to be the authorised user to the server 750. The indication may confirm that the user is the authorised user or may indicate that the speech sample does not match any samples of authorised users, indicating that the unstructured speech data has been altered by an unauthorised third party. If the indication indicates that the speech sample does not match any samples of authorised users, the server 750 may refuse the request of the user to perform the process and may transmit a message to the client device of the user indicating that the request has been declined. The message may also indicate to the user of the client device that it is likely that the speech data has been tampered with.
[00109] In step S735, if it has been determined that the user is the authorised user, the server 750 executes a process based on the extracted structured input data.
[00110] Additionally, if it has been determined that the user is the authorised user and the process based on the content of the extracted input data has been executed, the server 750 may transmit a message to the client device. The message may contain a confirmation of the identity of the user and may also contain details of the process which was executed.
[00111] It will be appreciated that embodiments of the present invention can be realized in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage, for example a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory, for example RAM, memory chips, device or integrated circuits, or on an optically or magnetically readable medium, for example a CD, DVD, magnetic disk or magnetic tape or the like. It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs comprising instructions that, when executed, implement embodiments of the present invention.
[00112] Accordingly, embodiments provide a program comprising code for implementing apparatus or a method as claimed in any one of the claims of this specification and a machine-readable storage storing such a program. Still further, such programs may be conveyed electronically via any medium, for example a communication signal carried over a wired or wireless connection and embodiments suitably encompass the same.
[00113] Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of them mean "including but not limited to", and they are not intended to (and do not) exclude other components, integers or steps. Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
[00114] Features, integers or characteristics described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. It will also be appreciated that, throughout the description and claims of this specification, language in the general form of "X for Y" (where Y is some action, activity or step and X is some means for carrying out that action, activity or step) encompasses means X adapted or arranged specifically, but not exclusively, to do Y.

[00115] The reader's attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.

Claims (22)

CLAIMS:
1. A method of processing input data by a server, the method comprising: receiving speech data from a client device; extracting structured input data from the speech data; determining if the speech data corresponds to an authorised user; and if the speech data corresponds to an authorised user, executing a process based on the extracted structured input data.
2. The method of claim 1, wherein determining that the speech data does not correspond to an authorised user indicates that the speech data may have been altered.
3. The method of claim 1 or claim 2, wherein extracting structured input data from the speech data comprises: converting the speech data into text data; and extracting the structured input data from the text data.
4. The method of any preceding claim, wherein determining if the speech data corresponds to an authorised user comprises: comparing the speech data to a prestored sample of the authorised user; confirming whether the speech data matches the prestored sample of the authorised user; and if the speech data matches the prestored sample of the authorised user, determining that the speech data corresponds to the authorised user.
5. The method of claim 4, wherein the determining if the speech data corresponds to the authorised user is performed by a biometric authentication server.
6. The method of claim 5, wherein the biometric authentication server performs the determining if the speech data corresponds to the authorised user using Artificial Intelligence, AI, software.
7. The method of any one of claims 3 to 6, wherein extracting the structured input data from the speech data is performed by a speech recognition server.
8. The method of any one of claims 3 to 6, wherein the extracting the structured input data from the speech data is performed using speech recognition software.
9. The method according to any preceding claim, wherein the speech data is received in a web browser request or a mobile application request.
10. The method according to any preceding claim, further comprising, if the speech data does not correspond to an authorised user, sending, to the client device, a notification that the input data may have been altered and that the process has not been executed.
11. A server for processing input data, the server configured to: receive speech data from a client device; extract structured input data from the speech data; determine if the speech data corresponds to an authorised user; and if the speech data corresponds to an authorised user, execute a process based on the extracted structured input data.
12. The server of claim 11, wherein determining that the speech data does not correspond to an authorised user indicates that the speech data may have been altered.
13. The server of claim 11 or claim 12, wherein extracting structured input data from the speech data comprises: converting the speech data into text data; and extracting the structured input data from the text data.
14. The server of any of claims 11 to 13, wherein determining if the speech data corresponds to an authorised user comprises: comparing the speech data to a prestored sample of the authorised user; confirming whether the speech data matches the prestored sample of the authorised user; and if the speech data matches the prestored sample of the authorised user, determining that the speech data corresponds to the authorised user.
15. The server of claim 14, wherein determining if the speech data corresponds to the authorised user further comprises: transmitting the speech data to a biometric authentication server which performs the determining; and receiving, from the biometric authentication server, an indication of whether the speech data matches the prestored sample of the authorised user.
16. The server of claim 14 or claim 15, wherein the determining is performed using Artificial Intelligence, AI, software.
17. The server of any one of claims 13 to 16, wherein extracting the structured input data from the speech data further comprises: transmitting the speech data to a speech recognition server which performs the extracting; and receiving, from the speech recognition server, the text data.
18. The server of any one of claims 13 to 16, wherein extracting the structured input data from the speech data is performed using speech recognition software.
19. The server according to any of claims 11 to 18, wherein the speech data is received in a web browser request or a mobile application request.
20. The server according to any of claims 11 to 19, further configured to, if the speech data does not correspond to an authorised user, send, to the client device, a notification that the input data may have been altered and that the process has not been executed.
21. A system for verifying input data, the system comprising the server according to any of claims 11 to 20 and a client device configured to: receive an input of speech data; and transmit the speech data to the server.
22. A carrier medium comprising computer readable code configured to cause a computer to perform the method of claim 17 or 19.
GB2102251.2A 2021-02-17 2021-02-17 Processing input data Pending GB2604102A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB2102251.2A GB2604102A (en) 2021-02-17 2021-02-17 Processing input data
EP22157148.2A EP4047496A1 (en) 2021-02-17 2022-02-16 Processing input data
US17/672,986 US20220262370A1 (en) 2021-02-17 2022-02-16 Processing Input Data

Publications (2)

Publication Number Publication Date
GB202102251D0 GB202102251D0 (en) 2021-03-31
GB2604102A true GB2604102A (en) 2022-08-31

Family

ID=75338837

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040074810A (en) * 2003-02-19 2004-08-26 (주) 자모바 씨.엘.에스 Method of control the browser and link the internet servises automatically on the client's computer by speech recognition, speaker verification and fingerprint identification
US8959360B1 (en) * 2012-09-10 2015-02-17 Google Inc. Voice authentication and command
CN108831470A (en) * 2018-08-24 2018-11-16 深圳伊讯科技有限公司 A kind of method and system by voice control BMS
US20190197228A1 (en) * 2017-01-25 2019-06-27 Ca, Inc. Secure biometric authentication with client-side feature extraction
EP3699908A1 (en) * 2011-03-21 2020-08-26 Apple Inc. Device access using voice authentication
