CN114637982A - Transaction processing method, device, equipment and medium - Google Patents

Transaction processing method, device, equipment and medium

Info

Publication number
CN114637982A
Authority
CN
China
Prior art keywords
information
voiceprint
determining
transaction
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210335514.7A
Other languages
Chinese (zh)
Inventor
杨晨
刘亚军
李美华
唐新伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202210335514.7A
Publication of CN114637982A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 - User authentication
    • G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 - Commerce
    • G06Q 30/06 - Buying, selling or leasing transactions
    • G06Q 30/0601 - Electronic shopping [e-shopping]
    • G06Q 30/0609 - Buyer or seller confidence or verification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 - Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/04 - Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 - Speaker identification or verification
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 - Speaker identification or verification
    • G10L 17/06 - Decision making techniques; Pattern matching strategies

Abstract

The present disclosure provides a transaction processing method, apparatus, device, and medium, which can be applied to the information technology field and the financial field. The transaction processing method comprises the following steps: receiving a transaction request initiated by a first object at a first terminal, wherein the transaction request comprises audio information and facial image information of the first object; determining attribute information of the first object based on the audio information and the facial image information; sending verification information to a second terminal in the case that the attribute information meets a preset condition, wherein the verification information is used to remind a second object of the second terminal and to authorize the first object to execute a transaction operation corresponding to the transaction request; receiving voice information returned by the second terminal according to the verification information, wherein the voice information comprises characteristic information representing the identity of the second object and the authorization intention of the second object; and determining an authorization result for executing the transaction operation by identifying the characteristic information.

Description

Transaction processing method, device, equipment and medium
Technical Field
The present disclosure relates to the field of information and financial technology, and in particular, to a transaction processing method, apparatus, device, medium, and program product.
Background
With the popularization of electronic commerce, the population conducting online transactions through electronic devices keeps expanding. However, some groups, such as minors and the elderly, have little knowledge of the online transaction process, and conventional online transactions conducted through electronic devices lack protective measures. As a result, many minors and elderly people make unintended purchases in online games or other online entertainment activities, causing loss of household property.
Disclosure of Invention
In view of the above, the present disclosure provides a transaction processing method, apparatus, device, medium, and program product.
According to an aspect of the present disclosure, there is provided a transaction processing method including:
receiving a transaction request initiated by a first object at a first terminal, wherein the transaction request comprises audio information and facial image information of the first object;
determining attribute information of the first object based on the audio information and the face image information;
sending verification information to a second terminal under the condition that the attribute information meets a preset condition, wherein the verification information is used for reminding a second object of the second terminal and authorizing the first object to execute a transaction operation corresponding to the transaction request;
receiving voice information returned by the second terminal according to the verification information, wherein the voice information comprises characteristic information, and the characteristic information represents the identity of the second object and the authorization intention of the second object; and
determining an authorization result for executing the transaction operation by identifying the characteristic information.
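The five-step method above can be sketched end to end as a single server-side decision routine. This is a minimal illustration only: the data layout, function names, and the "agree" intent label are assumptions for the sketch, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class TransactionRequest:
    amount: float
    estimated_age: int        # attribute information from the audio/face step
    guardian_id: str          # identity of the second object (e.g. a guardian)
    reply_speaker_id: str     # speaker identified from the returned voice reply
    reply_intent: str         # "agree" or "disagree", the extracted target field

def needs_guardian_approval(age: int) -> bool:
    # Preset condition named later in the disclosure: under 18 or over 70.
    return age < 18 or age > 70

def process_transaction(req: TransactionRequest) -> str:
    # No verification round trip when the attribute condition is not met.
    if not needs_guardian_approval(req.estimated_age):
        return "approved"
    # The voice reply must come from the second object (identity check) ...
    if req.reply_speaker_id != req.guardian_id:
        return "rejected"
    # ... and must express consent (authorization-intention check).
    return "approved" if req.reply_intent == "agree" else "rejected"
```

A transaction from a 30-year-old user is approved without a guardian round trip, while one from a 15-year-old is approved only when the reply both matches the guardian's identity and expresses consent.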
According to an embodiment of the present disclosure, determining attribute information of a first object based on audio information and facial image information includes:
respectively extracting a first voiceprint feature in the audio information and a skin feature in the facial image information;
attribute information of the first object is determined based on the first voiceprint feature and the skin feature.
According to an embodiment of the present disclosure, determining an authorization result for performing a transaction operation by identifying feature information includes:
extracting a second voiceprint feature in the voice information;
determining a target object corresponding to the second voiceprint feature from a voiceprint database according to the second voiceprint feature, wherein the voiceprint database comprises voiceprint feature identification information of different objects;
extracting a target field from the voice information in the case that the target object is a second object, wherein the target field represents an authorization intention of the second object;
and determining an authorization result for executing the transaction operation according to the target field.
According to an embodiment of the present disclosure, determining, from the voiceprint database, a target object corresponding to the second voiceprint feature according to the second voiceprint feature includes:
calculating the voiceprint similarity between the second voiceprint feature and the voiceprint feature identification information;
and under the condition that the voiceprint similarity is greater than a preset threshold value, determining an object corresponding to the voiceprint characteristic identification information as a target object.
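As a hedged sketch of this matching step, assume voiceprints are fixed-length embedding vectors and that similarity is cosine similarity; the disclosure specifies neither the metric nor a concrete threshold value, so both are illustrative here.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_target_object(query_print, voiceprint_db, threshold=0.8):
    """Return the enrolled object whose voiceprint exceeds the preset
    threshold and matches best, or None if no enrolled print qualifies.
    voiceprint_db maps object id -> enrolled voiceprint vector."""
    best_id, best_sim = None, threshold
    for obj_id, enrolled in voiceprint_db.items():
        sim = cosine_similarity(query_print, enrolled)
        if sim > best_sim:
            best_id, best_sim = obj_id, sim
    return best_id
```

Returning `None` when no similarity clears the threshold corresponds to the case in which the target object cannot be determined from the voiceprint database.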
According to an embodiment of the present disclosure, determining an authorization result for performing a transaction operation according to a target field includes:
determining a preset field of the second object from the voiceprint database according to the voiceprint characteristic identification information of the second object;
matching the target field with a preset field to obtain a matching result;
and determining an authorization result for executing the transaction operation according to the matching result.
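A hedged sketch of the field-matching step: the preset field is assumed here to be an enrolled passphrase such as "I agree to the authorization", and matching is a normalized string comparison. The disclosure does not specify the matching rule, so this normalization is an assumption.

```python
def normalize(text: str) -> str:
    # Case-, whitespace- and punctuation-insensitive comparison.
    return "".join(ch.lower() for ch in text if ch.isalnum())

def match_authorization_field(target_field: str, preset_field: str) -> bool:
    """True (authorization granted) only when the spoken target field
    matches the preset field enrolled in the voiceprint database."""
    return normalize(target_field) == normalize(preset_field)
```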
According to an embodiment of the present disclosure, the transaction processing method further includes:
sending, to the first terminal, early warning information for prompting the first object in the case that the target object is not the second object.
According to an embodiment of the present disclosure, the transaction processing method further includes:
acquiring authorization of the first object for entering the audio information and the facial image information;
after the authorization of the first object for entering the audio information and the facial image information is obtained, entering the audio information and the facial image information through the first terminal;
acquiring authorization of the second object for entering the voice information; and
after the authorization of the second object for entering the voice information is obtained, entering the voice information through the second terminal.
Another aspect of the present disclosure provides a transaction processing apparatus including: a first receiving module, a first determining module, a first sending module, a second receiving module, and a second determining module. The first receiving module is used for receiving a transaction request initiated by a first object at a first terminal, wherein the transaction request comprises audio information and facial image information of the first object. The first determining module is used for determining attribute information of the first object according to the audio information and the facial image information. The first sending module is used for sending verification information to the second terminal in the case that the attribute information meets the preset condition, wherein the verification information is used to remind a second object of the second terminal and to authorize the first object to execute the transaction operation corresponding to the transaction request. The second receiving module is used for receiving voice information returned by the second terminal according to the verification information, wherein the voice information comprises characteristic information representing the identity of the second object and the authorization intention of the second object. The second determining module is used for determining an authorization result for executing the transaction operation by identifying the characteristic information.
According to an embodiment of the present disclosure, the first determining module includes a first extraction unit and a first determination unit. The first extraction unit is used for respectively extracting a first voiceprint feature in the audio information and a skin feature in the facial image information. The first determination unit is used for determining attribute information of the first object according to the first voiceprint feature and the skin feature.
According to an embodiment of the present disclosure, the second determining module includes a second extraction unit, a second determination unit, a third extraction unit, and a third determination unit. The second extraction unit is used for extracting a second voiceprint feature in the voice information. The second determination unit is used for determining a target object corresponding to the second voiceprint feature from the voiceprint database according to the second voiceprint feature, wherein the voiceprint database comprises voiceprint feature identification information of different objects. The third extraction unit is used for extracting a target field from the voice information in the case that the target object is the second object, wherein the target field represents the authorization intention of the second object. The third determination unit is used for determining an authorization result for executing the transaction operation according to the target field.
According to an embodiment of the present disclosure, the second determination unit includes a calculation subunit and a first determination subunit. The calculation subunit is used for calculating the voiceprint similarity between the second voiceprint feature and the voiceprint feature identification information. The first determination subunit is used for determining, in the case that the voiceprint similarity is greater than a preset threshold, that the object corresponding to the voiceprint feature identification information is the target object.
According to an embodiment of the present disclosure, the third determination unit includes a second determination subunit, a matching subunit, and a third determination subunit. The second determination subunit is used for determining the preset field of the second object from the voiceprint database according to the voiceprint feature identification information of the second object. The matching subunit is used for matching the target field with the preset field to obtain a matching result. The third determination subunit is used for determining an authorization result for executing the transaction operation according to the matching result.
According to an embodiment of the present disclosure, the transaction processing apparatus further includes a second sending module, configured to send, to the first terminal, early warning information for prompting the first object in the case that the target object is not the second object.
According to an embodiment of the present disclosure, the transaction processing apparatus further includes a first acquisition module and a second acquisition module. The first acquisition module is used for acquiring authorization of the first object for entering the audio information and the facial image information, and for entering the audio information and the facial image information through the first terminal after that authorization is obtained. The second acquisition module is used for acquiring authorization of the second object for entering the voice information, and for entering the voice information through the second terminal after that authorization is obtained.
Another aspect of the disclosure provides an electronic device comprising one or more processors; memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the transaction processing method described above.
Another aspect of the present disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above-described transaction processing method.
Another aspect of the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the above-described transaction processing method.
According to the embodiments of the present disclosure, a transaction request initiated by a first object at a first terminal is received, attribute information of the first object is determined according to audio information and facial image information of the first object, and verification information is sent to a second terminal in the case that the attribute information meets a preset condition. When voice information returned by the second terminal is received, an authorization result for executing the transaction operation is determined by identifying the characteristic information. Since the authorization result for executing the transaction operation must be determined according to the voice information from the second terminal whenever the attribute information of the first object meets the preset condition, the transaction behavior of the first object at the first terminal can be effectively controlled, transaction security is improved, and property loss is reduced.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following description of embodiments of the disclosure, which proceeds with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario diagram of a transaction processing method, apparatus, device, medium and program product according to an embodiment of the disclosure;
FIG. 2 schematically shows a flow diagram of a transaction processing method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart for determining attribute information of a first object according to an embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart for determining an authorization result for performing a transaction operation according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a logical block diagram of a transaction processing method according to an embodiment of the present disclosure;
fig. 6 schematically shows a block diagram of the structure of a transaction processing device according to an embodiment of the present disclosure; and
fig. 7 schematically shows a block diagram of an electronic device adapted to implement a transaction processing method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together).
It should be noted that the transaction processing method and apparatus of the present disclosure can be used in the financial field and the information technology field, and can also be used in any field other than the financial field and the information technology field; the application field of the transaction processing method and apparatus of the present disclosure is not limited.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, application, and other handling of the personal information of the objects involved all comply with the provisions of relevant laws and regulations, necessary confidentiality measures are taken, and public order and good customs are not violated.
In the technical scheme of the disclosure, before the personal information of the object is obtained or collected, the authorization or the consent of the object is obtained.
In the related art, when a user conducts an online transaction on an electronic terminal, there are no related transaction protection measures. As a result, when a minor or an elderly person conducts online learning or online entertainment activities through an electronic terminal device, for example, tipping a live-streamer with virtual gifts, he or she may be unaware of the payment transactions that have occurred, resulting in property loss.
In view of this, embodiments of the present disclosure provide a transaction processing method that receives a transaction request initiated by a first object at a first terminal, determines attribute information of the first object according to audio information and facial image information of the first object, and sends verification information to a second terminal if the attribute information satisfies a preset condition. When voice information returned by the second terminal is received, an authorization result for executing the transaction operation is determined by identifying the characteristic information. Since the authorization result for executing the transaction operation must be determined according to the voice information from the second terminal whenever the attribute information of the first object meets the preset condition, the transaction behavior of the first object at the first terminal can be effectively controlled, transaction security is improved, and property loss is reduced.
Fig. 1 schematically illustrates an application scenario diagram of transaction processing according to an embodiment of the present disclosure.
As shown in fig. 1, the application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. Network 104 is the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The backend management server may analyze and process the received data such as the object request, and feed back a processing result (for example, a web page, information, or data obtained or generated according to the object request) to the terminal device.
It should be noted that the transaction processing method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the transaction processing device provided by the embodiments of the present disclosure may be generally disposed in the server 105. The transaction processing method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the transaction processing device provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The transaction processing method of the disclosed embodiment will be described in detail below with fig. 2 to 5 based on the scenario described in fig. 1.
Fig. 2 schematically shows a flow chart of a transaction processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the transaction processing method of this embodiment includes operations S210 to S250.
In operation S210, a transaction request initiated by a first object at a first terminal is received, wherein the transaction request includes audio information and facial image information of the first object.
According to an embodiment of the present disclosure, the audio information of the first object may be a piece of voice information entered through the electronic device, such as "please authorize a transaction". When the first object initiates a transaction request at the first terminal, the first terminal device may require the first object to enter a piece of audio information, which is entered after the authorization of the first object is obtained. The facial image information of the first object may be entered through a front camera of the electronic device, for example, one or more images containing facial features of the first object. Likewise, when the first object initiates a transaction request at the first terminal, the first terminal device may require the first object to enter facial image information, which is entered after the authorization of the first object is obtained. The audio information and the facial image information of the first object are transmitted to the server together with the transaction request.
In an embodiment of the present disclosure, prior to acquiring the audio information and facial image information of the first object, consent or authorization of the object may be obtained. For example, a request to acquire the audio information and facial image information may be issued to the first object before operation S210. Operation S210 is performed only in the case that the object agrees to or authorizes the acquisition of this information.
In operation S220, attribute information of the first object is determined according to the audio information and the face image information.
According to an embodiment of the present disclosure, the attribute information of the first object may be age information of the first object. Because people of different ages differ in voice frequency as well as in skin firmness, moisture, glossiness, facial lines, and the like, the age of the first object can be determined by analyzing the audio information and the facial image information of the first object with a machine-learned or trained artificial neural network model.
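As a minimal sketch of how per-modality estimates might be fused into a single attribute value: the disclosure only says that both modalities feed a trained artificial neural network model, so the linear weighting below is a placeholder assumption, with `voice_age` standing in for an estimate from the voiceprint feature and `face_age` for one from the skin features.

```python
def fuse_age_estimates(voice_age: float, face_age: float,
                       voice_weight: float = 0.4) -> int:
    """Fuse per-modality age estimates into one attribute value.

    Placeholder for the neural-network model described in the disclosure;
    the weighting scheme is illustrative only.
    """
    if not 0.0 <= voice_weight <= 1.0:
        raise ValueError("voice_weight must lie in [0, 1]")
    return round(voice_weight * voice_age + (1 - voice_weight) * face_age)
```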
In operation S230, in a case that the attribute information satisfies the preset condition, sending verification information to the second terminal, where the verification information is used to remind the second object of the second terminal and authorize the first object to perform a transaction operation corresponding to the transaction request.
According to embodiments of the present disclosure, people under 18 or over 70 years of age generally have weaker control over transaction behavior than those aged 18 to 70. Therefore, the preset condition may be set as an age of less than 18 years or greater than 70 years. The verification information sent to the second terminal may include the transaction amount, the transaction currency, the name of the transaction order or product, payee information, and inquiry authorization information, wherein the transaction amount, transaction currency, name of the transaction order or product, and payee information are derived from the transaction request. For example, in the case that user A is determined to be less than 18 years old, verification information is sent to the second terminal, whose user B is typically the guardian of user A. The transmitted verification information is, for example: "A purchased product A in XX on X month X day of X year; the product price is XX yuan; do you authorize this transaction?"
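The assembly of the verification information from transaction-request fields can be sketched as follows; the function name, parameter names, and message wording are illustrative assumptions, not taken from the disclosure.

```python
def build_verification_message(requester: str, product: str,
                               amount: float, currency: str = "yuan") -> str:
    """Assemble the guardian-facing verification text from fields carried
    in the transaction request (wording is illustrative)."""
    return (f"{requester} requests to purchase {product} for "
            f"{amount} {currency}. Do you authorize this transaction?")
```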
In operation S240, voice information returned by the second terminal according to the verification information is received, wherein the voice information includes characteristic information, and the characteristic information represents an identity of the second object and an authorization intention of the second object.
According to the embodiment of the present disclosure, the voice information may be entered by the second object at the second terminal: when the second terminal receives the verification information, the second object may be required to reply to the verification information in the form of voice. After authorization of the second object is obtained, the voice information may be entered by the second object through the second terminal. For example, the content of the voice information may be "I agree to the authorization". The word "agree" or "disagree" in the content of the voice information indicates the authorization intention of the second object, and the sound-wave wavelength or other voiceprint characteristics of the voice information can be used to characterize the identity of the second object.
In an embodiment of the present disclosure, prior to obtaining the voice information of the second object, consent or authorization of the object may be obtained. For example, a request to acquire the voice information may be issued to the second object before operation S240. Operation S240 is performed only in the case that the object agrees to or authorizes the acquisition of this information.
In operation S250, an authorization result for performing a transaction operation is determined by identifying the characteristic information.
According to an embodiment of the present disclosure, the characteristic information may include a voiceprint feature and a content feature of the voice information. For example, by recognizing the voiceprint feature, it is determined whether the voice information was uttered by the guardian of the first object himself or herself. If it is recognized that the voice information was not sent by the guardian of the first object, the transaction can be terminated directly.
According to the embodiment of the disclosure, in the case that it is determined that the voice information was uttered by the guardian of the first object himself, the authorization intention of the second object is determined by recognizing the content feature. If the authorization intention of the second object is to agree to the authorization, the first object can continue to perform the transaction at the first terminal. If the authorization intention of the second object is to disagree with the authorization, the transaction behavior of the first object at the first terminal is aborted and the transaction cannot be continued.
According to the embodiment of the disclosure, a transaction request initiated by a first object at a first terminal is received, attribute information of the first object is determined according to the audio information and facial image information of the first object, and verification information is sent to a second terminal when the attribute information meets a preset condition. When the voice information returned by the second terminal is received, an authorization result for executing the transaction operation is determined by identifying the characteristic information. Since the authorization result for executing the transaction operation must be determined from the voice information of the second terminal whenever the attribute information of the first object meets the preset condition, the transaction behavior of the first object at the first terminal can be effectively controlled, transaction security is improved, and property loss is reduced.
Fig. 3 schematically shows a flow chart for determining attribute information of a first object according to an embodiment of the present disclosure.
As shown in fig. 3, the method of determining attribute information of a first object according to this embodiment includes operations S310 to S320.
In operation S310, a first voiceprint feature in the audio information and a skin feature in the face image information are respectively extracted.
According to an embodiment of the present disclosure, the first voiceprint feature may include the wavelength of the sound wave, the frame frequency of the sound wave, the frame number of the sound wave, the characteristic peak of the sound wave, and the like. The skin features in the facial image information may include the firmness of skin cells, the moisture content of the cells, the glossiness of the skin, the lines of the skin surface, and the like.
In operation S320, attribute information of the first object is determined according to the first voiceprint feature and the skin feature.
According to an embodiment of the present disclosure, the attribute information of the first object may be age information of the first object. Weights may be set for the first voiceprint feature and the skin feature. A voiceprint feature recognition model and a skin feature recognition model are obtained through machine learning or training of a neural network model. The voiceprint feature recognition model outputs a first predicted age of the first object by recognizing the first voiceprint feature, and the skin feature recognition model outputs a second predicted age of the first object by recognizing the skin features. For example, if the weight of the voiceprint features is m, the weight of the skin features is n, the first predicted age is A, and the second predicted age is B, the age information of the first object can finally be calculated as Am + Bn.
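The weighted combination Am + Bn described above can be sketched as follows; the function name and the example weights are illustrative (the patent does not fix concrete values), and the two predicted ages stand in for the outputs of the voiceprint and skin recognition models:

```python
def estimate_age(first_predicted_age: float, second_predicted_age: float,
                 voiceprint_weight: float, skin_weight: float) -> float:
    """Combine the two model outputs A and B with the weights m and n: A*m + B*n."""
    return (first_predicted_age * voiceprint_weight
            + second_predicted_age * skin_weight)

# e.g. the voiceprint model predicts 12, the skin model predicts 14,
# with illustrative weights 0.6 and 0.4
age = estimate_age(12.0, 14.0, 0.6, 0.4)  # 12.8
```

The weights would typically be chosen so that the more reliable of the two models contributes more to the final estimate.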
According to the embodiment of the disclosure, the voiceprint feature recognition model can be obtained by supervised training of a neural network model on a large amount of sample audio data with different sound-wave wavelengths, frame frequencies, frame numbers, and characteristic peaks, using the ages of the sound sources as labels.
According to the embodiment of the disclosure, the skin feature recognition model can be obtained by supervised training of a neural network model on a large amount of sample facial image data with different skin-cell firmness, cell moisture content, skin glossiness, and skin-surface lines, using age as the label.
According to the embodiment of the disclosure, the attribute information of the first object is determined by extracting the voiceprint features and the skin features, and the object attribute is determined by two dominant features representing the object attribute, so that the accuracy is high.
Fig. 4 schematically illustrates a flow chart for determining an authorization result for performing a transaction operation according to an embodiment of the disclosure.
As shown in fig. 4, the method of determining an authorization result for performing a transaction operation of this embodiment includes operations S410 to S440.
In operation S410, a second voiceprint feature in the speech information is extracted.
According to an embodiment of the present disclosure, the second voiceprint feature may include the wavelength of the sound wave, the frame frequency of the sound wave, the frame number of the sound wave, and the characteristic peak of the sound wave.
In operation S420, a target object corresponding to the second voiceprint feature is determined from a voiceprint database according to the second voiceprint feature, where the voiceprint database includes voiceprint feature identification information of different objects.
According to an embodiment of the present disclosure, the voiceprint feature identification information may include voiceprint feature data stored in the voiceprint database by different objects in advance, for example: the wavelength of the sound wave, the frame frequency of the sound wave, the frame number of the sound wave, and the characteristic peak of the sound wave. The target object corresponding to the second voiceprint feature can be determined by matching the data of the second voiceprint feature against the voiceprint feature data in the voiceprint database. For example, suppose the second voiceprint feature consists of wavelength a, frame frequency b, frame number c, and characteristic peak d; the voiceprint feature data of object A in the database are wavelength a, frame frequency b, frame number e, and characteristic peak d; and the voiceprint feature data of object B in the database are wavelength a, frame frequency b, frame number c, and characteristic peak d. The voiceprint feature data of object B match the second voiceprint feature, so it can be determined that the voice information was sent by object B, and object B is the target object.
In operation S430, in case that the target object is a second object, a target field is extracted from the voice information, wherein the target field characterizes an authorization intention of the second object.
According to an embodiment of the present disclosure, the second object may be a guardian of the first object, and in the case that it is determined that object B is the guardian of the first object, the target field may be extracted from the voice information. The target field can be extracted using a word segmentation technique. For example, the voice information may be "I agree to the authorization"; processing the voice information with word segmentation yields several fields: I / agree / authorize. The target fields may then be "agree" and "authorize".
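The field extraction step can be sketched as follows; a plain whitespace split stands in for a real word-segmentation step (Chinese text would need a proper segmenter), and the intent keyword set is an assumption for illustration only:

```python
def extract_target_fields(utterance: str) -> list[str]:
    """Segment the utterance and keep only the fields that express intent.

    A simple split stands in for a real word-segmentation step; the
    keyword set below is illustrative, not taken from the patent.
    """
    intent_keywords = {"agree", "disagree", "authorize", "not"}
    tokens = utterance.lower().split()
    return [t for t in tokens if t in intent_keywords]

extract_target_fields("I agree to authorize")  # ['agree', 'authorize']
```

The returned fields are then compared against the second object's authorization phrasing in the following operation.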
In operation S440, an authorization result for performing the transaction operation is determined according to the target field.
According to an embodiment of the present disclosure, the authorization intention of the second object may be determined through the target field. For example, when the target fields are agree/authorize, the second object authorizes the first object to perform the transaction operation at the first terminal. When the target fields are not/agree/authorize, the second object does not authorize the first object to execute the transaction operation at the first terminal.
According to the embodiment of the disclosure, the identity of the second object is determined through the voiceprint features in the voice message, and then the authorization intention in the voice message is determined.
According to an embodiment of the present disclosure, determining, from the voiceprint database, a target object corresponding to the second voiceprint feature according to the second voiceprint feature includes:
calculating the voiceprint similarity of the second voiceprint characteristic and the voiceprint characteristic identification information;
and under the condition that the voiceprint similarity is greater than a preset threshold value, determining an object corresponding to the voiceprint characteristic identification information as a target object.
According to the embodiment of the disclosure, voiceprint feature vectors can be constructed from the second voiceprint feature and from the voiceprint feature identification information respectively, and the cosine similarity of the voiceprint feature vectors can then be calculated. For example, the second voiceprint feature may consist of wavelength a, frame frequency b, frame number c, and characteristic peak d, so a voiceprint feature vector M = (a, b, c, d) can be constructed. The voiceprint feature identification information of object A in the voiceprint database may consist of wavelength a, frame frequency b, frame number e, and characteristic peak d, giving a voiceprint feature vector N = (a, b, e, d) for object A. The voiceprint feature identification information of object B in the voiceprint database may consist of wavelength a, frame frequency b, frame number c, and characteristic peak f, giving a voiceprint feature vector P = (a, b, c, f) for object B.
According to an embodiment of the present disclosure, for example, the cosine similarity between the voiceprint feature vectors M and N is calculated to obtain the voiceprint similarity X1 between the second voiceprint feature and object A, and the cosine similarity between the voiceprint feature vectors M and P is calculated to obtain the voiceprint similarity X2 between the second voiceprint feature and object B.
According to an embodiment of the present disclosure, for example, if the preset threshold is X, X1 is less than X, and X2 is greater than X, then object B can be determined to be the target object.
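The similarity-based matching described above can be sketched as follows; the numeric feature vectors and the threshold value are illustrative stand-ins for the (a, b, c, d) voiceprint features, and the function names are not from the patent:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def find_target_object(query, database, threshold):
    """Return the first object whose stored vector exceeds the threshold, else None."""
    for name, vector in database.items():
        if cosine_similarity(query, vector) > threshold:
            return name
    return None

# illustrative vectors playing the roles of M, N, and P
M = (1.0, 2.0, 3.0, 4.0)
database = {"A": (1.0, 2.0, 5.0, 4.0), "B": (1.0, 2.0, 3.0, 4.0)}
find_target_object(M, database, 0.999)  # "B"
```

With a tight threshold, only the vector that is (near-)identical to the query, here object B's, is accepted.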
According to an embodiment of the disclosure, the second object is a holder of the second terminal, and the identity information thereof may be stored in a database for determining the identity of the target object.
According to the embodiment of the disclosure, the target object is determined by calculating the similarity of the voiceprint features, and the identity of the object sending the voice information is determined by utilizing the uniqueness of the voiceprint of each person, so that the accuracy of identity verification can be improved, and the risk of property loss caused by impersonation authorization is reduced.
According to an embodiment of the present disclosure, determining an authorization result for performing a transaction operation according to a target field includes:
determining a preset field of the second object from the voiceprint database according to the voiceprint characteristic identification information of the second object;
matching the target field with a preset field to obtain a matching result;
and determining an authorization result for executing the transaction operation according to the matching result.
According to the embodiment of the disclosure, preset fields customized by different users can be stored in the voiceprint database, for example: "XXX grants the right to XXX transactions". Suppose the target fields in the voice information are agree/authorize. Matching the target fields against the preset field shows that they do not match, so the authorization fails, indicating that the second object has not granted the first object the right to execute the transaction at the first terminal.
According to the embodiment of the disclosure, the target field in the authorization voice is matched against the preset field of the second object. Thus, even if a related person impersonates the second object by playing a recording of the second object at the second terminal, so that the target object is determined to be the second object, transaction authorization still cannot be obtained if the target field of the authorization voice in the recording does not match the preset field. This further reduces the risk caused by impersonated authorization of transactions.
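The preset-field check can be sketched as follows; requiring every preset field to appear among the extracted target fields is one plausible reading of the matching rule, and the field values are illustrative:

```python
def authorize_by_preset_field(target_fields: list[str],
                              preset_fields: list[str]) -> bool:
    """Authorization passes only if every preset field occurs in the target fields.

    This all-fields-present rule is an assumption; the patent only states
    that the target field is matched against the preset field.
    """
    return all(field in target_fields for field in preset_fields)

authorize_by_preset_field(["agree", "authorize"], ["agree", "authorize"])  # True
authorize_by_preset_field(["agree", "authorize"], ["grant", "authorize"])  # False
```

A recording that lacks the user's customized preset phrase therefore fails this check even when the voiceprint itself matches.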
According to an embodiment of the present disclosure, the transaction processing method further includes: and sending early warning information for prompting the first object to the first terminal under the condition that the target object is not the second object.
According to the embodiment of the present disclosure, if the target object determined through the recognition of the voiceprint feature is not the second object, the warning information may be transmitted to the first terminal. The form of the early warning information includes but is not limited to short messages, pop-up window prompting information and the like. The content of the warning message may be "risk of transaction, suggestion to abort transaction", etc.
According to the embodiment of the disclosure, the early warning information is sent to the first terminal, so that the first object can be prompted to have risks in transaction, and the first object can be facilitated to stop transaction behaviors in time.
According to an embodiment of the present disclosure, the transaction processing method further includes:
obtaining authorization of the first object for entering the audio information and the facial image information;
after the authorization of the first object is obtained, entering the audio information and the facial image information through the first terminal;
obtaining authorization of the second object for entering the voice information;
and after the authorization of the second object is obtained, entering the voice information through the second terminal.
According to the embodiment of the disclosure, when obtaining the authorization of the first object for entering the audio information and the facial image information, the authorization can be requested by sending prompt information to the first terminal, for example: "Allow the microphone to be turned on for audio entry?", "Allow the camera to be turned on for facial image entry?", and so on. After granting authorization, the first object may enter the audio information and the facial image information through the first terminal.
According to the embodiment of the disclosure, when obtaining the authorization of the second object for entering the voice information, the authorization can be requested by sending prompt information to the second terminal, for example: "Allow the microphone to be turned on for audio entry?", and so on. After granting authorization, the second object may enter the voice information through the second terminal.
According to the embodiment of the disclosure, the audio information, the facial image information and the voice information are all input after the authorization of the terminal operation object is obtained, so that the information security of the user is guaranteed.
Fig. 5 schematically illustrates a logic block diagram of a transaction processing method according to an embodiment of the present disclosure.
The transaction processing method of this embodiment includes operations S501 to S512.
Receiving a transaction request initiated by a first object at a first terminal in operation S501, wherein the transaction request includes audio information and facial image information of the first object;
determining attribute information of the first object based on the audio information and the face image information in operation S502;
in operation S503, it is determined whether the attribute information of the first object satisfies a preset condition, and if not, operation S504 is performed; if so, operation S505 is performed.
In operation S504, the transaction request is processed, and a transaction operation corresponding to the transaction request is executed.
In operation S505, sending verification information to the second terminal, where the verification information is used to remind the second object of the second terminal that the first object is to execute the transaction operation corresponding to the transaction request;
in operation S506, receiving voice information returned by the second terminal according to the verification information, wherein the voice information includes feature information, and the feature information represents the identity of the second object and the authorization intention of the second object;
in operation S507, it is determined whether the target object is the second object by identifying the voiceprint feature information in the feature information, and if not, operation S508 is performed, and if so, operation S509 is performed.
In operation S508, the transaction is aborted and warning information is transmitted to the first terminal.
In operation S509, extracting a target field from the voice information, wherein the target field represents the authorization intention of the second object;
in operation S510, it is determined whether the target field matches the preset field of the second object, if so, operation S511 is performed, and if not, operation S512 is performed.
In operation S511, the authorization is passed, the transaction request is processed, and a transaction operation corresponding to the transaction request is performed.
In operation S512, the authorization is not passed, and the processing of the transaction request is aborted.
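The decision flow of operations S501 to S512 can be sketched as follows; the callback parameters, the dictionary keys, and the assumption that the preset condition is an age threshold are all illustrative, and the verification-information round trip between the two terminals is elided:

```python
def process_transaction(request, *, age_threshold, verify_guardian, extract_intent):
    """Decision skeleton of operations S501-S512 (all names are illustrative).

    The preset condition is assumed here to be "estimated age below a
    threshold", i.e. the first object appears to need guardian approval.
    """
    if request["estimated_age"] >= age_threshold:
        return "execute"                   # S504: process the transaction directly
    voice = request["guardian_voice"]      # S505/S506: verification round trip elided
    if not verify_guardian(voice):
        return "abort_and_warn"            # S508: voiceprint does not match the guardian
    if extract_intent(voice):
        return "execute"                   # S511: target field matches the preset field
    return "abort"                         # S512: authorization not passed
```

A real system would wire `verify_guardian` to the voiceprint-similarity check and `extract_intent` to the field-matching step described earlier.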
Based on the transaction processing method, the disclosure also provides a transaction processing device. The apparatus will be described in detail below with reference to fig. 6.
Fig. 6 schematically shows a block diagram of a transaction apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the transaction processing apparatus 600 of this embodiment includes a first receiving module 610, a first determining module 620, a first sending module 630, a second receiving module 640, and a second determining module 650.
The first receiving module 610 is configured to receive a transaction request initiated by a first object at a first terminal, where the transaction request includes audio information and facial image information of the first object. In an embodiment, the first receiving module 610 may be configured to perform the operation S210 described above, which is not described herein again.
The first determining module 620 is configured to determine attribute information of the first object according to the audio information and the facial image information. In an embodiment, the first determining module 620 may be configured to perform the operation S220 described above, which is not described herein again.
The first sending module 630 is configured to send verification information to the second terminal when the attribute information meets a preset condition, where the verification information is used to remind a second object of the second terminal, and the first object executes a transaction operation corresponding to the transaction request. In an embodiment, the first sending module 630 may be configured to perform the operation S230 described above, which is not described herein again.
The second receiving module 640 is configured to receive the voice information returned by the second terminal according to the verification information, where the voice information includes feature information, and the feature information represents an identity of the second object and an intention of the second object. In an embodiment, the second receiving module 640 may be configured to perform the operation S240 described above, which is not described herein again.
The second determining module 650 is configured to determine an authorization result for performing the transaction operation by identifying the characteristic information. In an embodiment, the second determining module 650 may be configured to perform the operation S250 described above, which is not described herein again.
According to an embodiment of the present disclosure, the first determination module includes a first extraction unit and a first determination unit. The first extraction unit is used for respectively extracting a first voiceprint feature in the audio information and a skin feature in the face image information. A first determining unit for determining attribute information of the first object according to the first voiceprint feature and the skin feature.
According to an embodiment of the present disclosure, the second determination module includes a second extraction unit, a second determination unit, a third extraction unit, and a third determination unit. The second extraction unit is used for extracting a second voiceprint feature in the voice information. And the second determining unit is used for determining a target object corresponding to the second voiceprint feature from the voiceprint database according to the second voiceprint feature, wherein the voiceprint database comprises voiceprint feature identification information of different objects. And a third extraction unit, configured to extract a target field from the voice information if the target object is the second object, where the target field represents an authorization intention of the second object. And the third determination unit is used for determining an authorization result for executing the transaction operation according to the target field.
According to an embodiment of the present disclosure, the second determination unit includes a calculation subunit and a first determination subunit. And the calculating subunit is used for calculating the voiceprint similarity between the second voiceprint characteristic and the voiceprint characteristic identification information. And the first determining subunit is configured to determine, when the voiceprint similarity is greater than a preset threshold, that the object corresponding to the voiceprint feature identification information is the target object.
According to an embodiment of the present disclosure, the third determining unit includes a second determining subunit, a matching subunit, and a third determining subunit. And the second determining subunit is used for determining the preset field of the second object from the voiceprint database according to the voiceprint characteristic identification information of the second object. And the matching subunit is used for matching the target field and the preset field to obtain a matching result. And the third determining subunit is used for determining an authorization result for executing the transaction operation according to the matching result.
According to an embodiment of the present disclosure, the transaction processing apparatus further includes a second sending module, configured to send, to the first terminal, warning information for prompting the first object when the target object is not the second object.
According to an embodiment of the present disclosure, the transaction processing apparatus further includes a first obtaining module and a second obtaining module. The first obtaining module is configured to obtain authorization of the first object for entering the audio information and the facial image information; after the authorization of the first object is obtained, the audio information and the facial image information are entered through the first terminal. The second obtaining module is configured to obtain authorization of the second object for entering the voice information; after the authorization of the second object is obtained, the voice information is entered through the second terminal.
According to the embodiment of the present disclosure, any plurality of the first receiving module 610, the first determining module 620, the first sending module 630, the second receiving module 640, and the second determining module 650 may be combined into one module to be implemented, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the first receiving module 610, the first determining module 620, the first sending module 630, the second receiving module 640, and the second determining module 650 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware, and firmware, or by a suitable combination of any several of them. Alternatively, at least one of the first receiving module 610, the first determining module 620, the first transmitting module 630, the second receiving module 640, the second determining module 650 may be at least partially implemented as a computer program module, which when executed, may perform a corresponding function.
Fig. 7 schematically shows a block diagram of an electronic device adapted to implement a transaction processing method according to an embodiment of the present disclosure.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are stored. The processor 701, the ROM702, and the RAM 703 are connected to each other by a bus 704. The processor 701 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM702 and/or the RAM 703. It is noted that the programs may also be stored in one or more memories other than the ROM702 and RAM 703. The processor 701 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
Electronic device 700 may also include input/output (I/O) interface 705, which input/output (I/O) interface 705 is also connected to bus 704, according to an embodiment of the present disclosure. The electronic device 700 may also include one or more of the following components connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that the computer program read out therefrom is mounted in the storage section 708 as necessary.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement a transaction processing method according to an embodiment of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM702 and/or the RAM 703 and/or one or more memories other than the ROM702 and the RAM 703 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method illustrated by the flow chart. When the computer program product runs in a computer system, the program code is used for causing the computer system to realize the method provided by the embodiment of the disclosure.
The computer program performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure when executed by the processor 701. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via the communication section 709, and/or installed from the removable medium 711. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for carrying out the computer programs provided by the embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user's computing device, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure may be combined and/or sub-combined in various ways, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined and/or sub-combined in various ways without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (11)

1. A transaction processing method, comprising:
receiving a transaction request initiated by a first object at a first terminal, wherein the transaction request comprises audio information and facial image information of the first object;
determining attribute information of the first object according to the audio information and the facial image information;
sending verification information to a second terminal in the case that the attribute information meets a preset condition, wherein the verification information is used to prompt a second object at the second terminal to authorize the first object to perform a transaction operation corresponding to the transaction request;
receiving voice information returned by the second terminal according to the verification information, wherein the voice information comprises characteristic information, and the characteristic information represents the identity of the second object and the authorization intention of the second object; and
determining an authorization result for performing the transaction operation by identifying the characteristic information.
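The overall method of claim 1 can be illustrated with a minimal, hypothetical sketch. All names here (`TransactionRequest`, `process_transaction`, the `attribute_rule` and `recognize` callables, and the terminal's `request_verification` interface) are assumptions introduced for illustration only; the claim does not prescribe this decomposition:

```python
from dataclasses import dataclass

@dataclass
class TransactionRequest:
    first_object_id: str
    audio: bytes       # audio information of the first object
    face_image: bytes  # facial image information of the first object

def process_transaction(req, second_terminal, attribute_rule, recognize):
    """Sketch of the claimed flow: determine attribute information from the
    audio and facial image, request voice verification from the second
    terminal when the preset condition is met, then decide the
    authorization result from the second object's reply."""
    attrs = attribute_rule(req.audio, req.face_image)
    if not attrs.get("meets_preset_condition"):
        # Preset condition not met: no second-object authorization is needed.
        return "approved"
    # Push verification information to the second terminal and await the
    # second object's spoken reply (the characteristic information).
    voice_reply = second_terminal.request_verification(req)
    identity_ok, intent_ok = recognize(voice_reply)
    return "approved" if (identity_ok and intent_ok) else "rejected"
```

The sketch makes the two-factor nature of the claim explicit: the reply must both identify the second object and express an authorization intention before the transaction proceeds.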
2. The method of claim 1, wherein said determining attribute information of the first object from the audio information and the facial image information comprises:
extracting a first voiceprint feature from the audio information and a skin feature from the facial image information, respectively;
determining the attribute information of the first object according to the first voiceprint feature and the skin feature.
3. The method of claim 1, wherein said determining an authorization result for performing the transaction operation by identifying the characteristic information comprises:
extracting a second voiceprint feature in the voice information;
determining a target object corresponding to the second voiceprint feature from a voiceprint database according to the second voiceprint feature, wherein the voiceprint database comprises voiceprint feature identification information of different objects;
extracting a target field from the voice information in the case that the target object is the second object, wherein the target field characterizes an authorization intention of the second object;
determining an authorization result for performing the transaction operation according to the target field.
4. The method of claim 3, wherein the determining, from the second voiceprint feature, a target object from a voiceprint database that corresponds to the second voiceprint feature comprises:
calculating the voiceprint similarity of the second voiceprint feature and the voiceprint feature identification information;
in the case that the voiceprint similarity is greater than a preset threshold, determining the object corresponding to the voiceprint feature identification information as the target object.
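One common way to realize the matching of claims 3 and 4 is to embed voiceprints as fixed-length vectors and compare them by cosine similarity against the preset threshold. The sketch below assumes such embeddings already exist (the embedding step itself, e.g. an MFCC pipeline or a neural encoder, and the function names are illustrative assumptions, not part of the claims):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprint embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_target_object(second_voiceprint: np.ndarray,
                       voiceprint_db: dict,
                       threshold: float = 0.8):
    """Return the enrolled object whose voiceprint is most similar to the
    probe, provided the similarity exceeds the preset threshold; otherwise
    return None (no target object is determined)."""
    best_obj, best_sim = None, -1.0
    for obj_id, enrolled in voiceprint_db.items():
        sim = cosine_similarity(second_voiceprint, enrolled)
        if sim > best_sim:
            best_obj, best_sim = obj_id, sim
    return best_obj if best_sim > threshold else None
```

The threshold trades false accepts against false rejects; in a payment-authorization setting it would typically be tuned on enrollment data rather than fixed at 0.8.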
5. The method of claim 3, wherein said determining an authorization result for performing the transaction operation according to the target field comprises:
determining a preset field of the second object from a voiceprint database according to the voiceprint feature identification information of the second object;
matching the target field with the preset field to obtain a matching result;
determining an authorization result for performing the transaction operation according to the matching result.
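Claim 5's matching of the recognized target field against the second object's preset field could be as simple as a normalized string comparison; the patent does not specify the matching rule, so the sketch below is one assumed realization (function names are hypothetical):

```python
import re

def normalize(text: str) -> str:
    """Lower-case and strip punctuation/whitespace for a tolerant match."""
    return re.sub(r"[^\w]+", "", text.lower())

def authorization_result(target_field: str, preset_field: str) -> bool:
    """Approve the transaction operation only when the field recognized
    from the second object's voice reply matches the preset field stored
    for that object in the voiceprint database."""
    return normalize(target_field) == normalize(preset_field)
```

A production system would likely replace exact matching with fuzzy or intent-level matching (e.g. edit distance or an NLU classifier) to tolerate speech-recognition errors.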
6. The method of claim 3, further comprising:
in the case that the target object is not the second object, sending early-warning information to the first terminal to prompt the first object.
7. The method of claim 1, further comprising:
obtaining authorization of the first object for entering the audio information and the facial image information;
after the authorization of the first object for entering the audio information and the facial image information is obtained, entering the audio information and the facial image information through the first terminal;
obtaining authorization of the second object for entering the voice information; and
after the authorization of the second object for entering the voice information is obtained, entering the voice information through the second terminal.
8. A transaction processing device comprising:
a first receiving module for receiving a transaction request initiated by a first object at a first terminal, wherein the transaction request comprises audio information and facial image information of the first object;
a first determining module for determining attribute information of the first object according to the audio information and the facial image information;
a first sending module for sending verification information to a second terminal in the case that the attribute information meets a preset condition, wherein the verification information is used to prompt a second object at the second terminal to authorize the first object to perform the transaction operation corresponding to the transaction request;
a second receiving module, configured to receive voice information returned by the second terminal according to the verification information, where the voice information includes feature information, and the feature information represents an identity of the second object and an authorization intention of the second object; and
a second determining module for determining an authorization result for performing the transaction operation by identifying the characteristic information.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 7.
11. A computer program product comprising a computer program which, when executed by a processor, implements a method according to any one of claims 1 to 7.
CN202210335514.7A 2022-03-30 2022-03-30 Transaction processing method, device, equipment and medium Pending CN114637982A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210335514.7A CN114637982A (en) 2022-03-30 2022-03-30 Transaction processing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210335514.7A CN114637982A (en) 2022-03-30 2022-03-30 Transaction processing method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114637982A true CN114637982A (en) 2022-06-17

Family

ID=81952697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210335514.7A Pending CN114637982A (en) 2022-03-30 2022-03-30 Transaction processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114637982A (en)

Similar Documents

Publication Publication Date Title
EP3477519B1 (en) Identity authentication method, terminal device, and computer-readable storage medium
US11893099B2 (en) Systems and methods for dynamic passphrases
JP5695709B2 (en) Method and system for validating personal account identifiers using biometric authentication and self-learning algorithms.
US20190050632A1 (en) Method and apparatus for generating training data for human face recognition, device and computer storage medium
US10747866B2 (en) Transaction approval based on a scratch pad
CN107393541B (en) Information verification method and device
KR20160011709A (en) Method, apparatus and system for payment validation
US20150161613A1 (en) Methods and systems for authentications and online transactions
KR20200048201A (en) Electronic device and Method for controlling the electronic device thereof
CN113826135B (en) System, method and computer system for contactless authentication using voice recognition
US20210390445A1 (en) Systems and methods for automatic decision-making with user-configured criteria using multi-channel data inputs
CN112201254A (en) Non-sensitive voice authentication method, device, equipment and storage medium
US10565578B2 (en) Department of defense point of sale
CN114637982A (en) Transaction processing method, device, equipment and medium
US20210398135A1 (en) Data processing and transaction decisioning system
CA3156390A1 (en) Systems and methods for providing in-person status to a user device
US11755118B2 (en) Input commands via visual cues
CN113343211A (en) Data processing method, processing system, electronic device and storage medium
US11803898B2 (en) Account establishment and transaction management using biometrics and intelligent recommendation engine
US20230011451A1 (en) System and method for generating responses associated with natural language input
EP4075364A1 (en) Method for determining the likelihood for someone to remember a particular transaction
CN115034904A (en) Transaction admission auditing method, apparatus, device, medium and program product
CN117808299A (en) Service handling method, device, equipment and medium
CN115731598A (en) Customer service method, apparatus, device and medium
CN113393318A (en) Bank card application wind control method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination