CN113470278A - Self-service payment method and device - Google Patents

Self-service payment method and device

Info

Publication number
CN113470278A
CN113470278A
Authority
CN
China
Prior art keywords
payment
user
information
voice
voice information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110739160.8A
Other languages
Chinese (zh)
Inventor
冷真敏
傅强
张舜华
李昭莹
李娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp filed Critical China Construction Bank Corp
Priority to CN202110739160.8A
Publication of CN113470278A
Legal status: Pending

Classifications

    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F19/00Complete banking systems; Coded card-freed arrangements adapted for dispensing or receiving monies or the like and posting such transactions to existing accounts, e.g. automatic teller machines
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Abstract

The application discloses a self-service payment method and device, relating to the technical field of artificial intelligence, in particular to human-computer interaction and natural language processing. The method includes: in response to detecting a user's face information, collecting the face information and verifying it; in response to the verification passing, determining a user identifier corresponding to the face information, determining a payment item list according to the user identifier, and voice-broadcasting each payment item in the list; receiving the user's voice information and parsing it to determine a target payment item in the payment item list; searching for the corresponding payment bill according to the target payment item, and voice-broadcasting the name of the payment bill and the corresponding payment amount; and in response to the user confirming the name of the payment bill and the corresponding payment amount, performing payment deduction based on the payment bill. Text-based human-computer interaction with the terminal device is thereby replaced by voice interaction combined with facial recognition, making self-service payment convenient, fast, simple, and secure.

Description

Self-service payment method and device
Technical Field
The application relates to the technical field of artificial intelligence, in particular to the technical field of human-computer interaction and natural language processing, and particularly relates to a self-service payment method and device.
Background
The existing payment process is tedious to operate and requires users to have basic page-operation skills, so many elderly users or users with limited literacy cannot enjoy the convenience of online payment.
In the process of implementing the present application, the inventor finds that at least the following problems exist in the prior art:
the self-service payment mode that relies on human-computer interaction through text input prevents elderly users or users with limited literacy from enjoying the convenience of online payment.
Disclosure of Invention
In view of this, the embodiments of the present application provide a self-service payment method and device, which can solve the problem that elderly users or users with limited literacy cannot enjoy the convenience of online payment because existing self-service payment relies on human-computer interaction through text input.
In order to achieve the above object, according to an aspect of the embodiments of the present application, there is provided a self-service payment method, including:
in response to the detection of the face information of the user, acquiring the face information of the user, and further verifying the face information of the user;
in response to the verification passing, determining a user identifier corresponding to the face information of the user, determining a payment item list according to the user identifier, and broadcasting each payment item in the payment item list in a voice mode for the user to select;
receiving voice information of a user, and analyzing the voice information to determine a target payment item in a payment item list;
searching a corresponding payment bill according to the target payment project, and further broadcasting the name of the payment bill and the corresponding payment amount in a voice mode for a user to confirm;
and responding to the name of the confirmed payment bill of the user and the corresponding payment amount, and skipping to a payment page to pay and deduct money based on the payment bill.
Optionally, the verifying the face information of the user includes:
calling a face information base, and matching the face information of the user with each piece of face information in the face information base based on a face recognition algorithm;
and determining that the verification is passed in response to determining that the face information in the face information base is matched with the face information of the user.
Optionally, parsing the voice information includes:
and inputting the voice information into a pre-trained voice recognition model to generate corresponding text information.
Optionally, the pre-trained speech recognition model is used to characterize a correspondence of speech information to text information.
Optionally, parsing the voice information includes:
determining an accent identifier corresponding to the voice information;
calling a voice information base corresponding to the accent identifier, and further calling a voice recognition algorithm to determine the similarity between the voice information and each voice information in the voice information base;
and determining target voice information in the voice information base according to the similarity, and further determining text information corresponding to the target voice information in the voice information base to be determined as an analysis result.
Optionally, parsing the voice information includes:
determining language identification corresponding to the voice information;
calling a voice information base corresponding to the language identification, and further calling a voice recognition algorithm to determine the similarity between the voice information and each voice information in the voice information base;
and determining target voice information in the voice information base according to the similarity, and further determining text information corresponding to the target voice information in the voice information base to be determined as an analysis result.
Optionally, determining a target payment item in the payment item list includes:
and matching the text information with each payment item in the payment item list to determine the corresponding payment item, and further determining the corresponding payment item as a target payment item.
Optionally, the payment deduction is performed based on the payment order, including:
prompting a user to acquire face information by voice, and further acquiring the face information;
and carrying out payment verification based on the collected face information, responding to the fact that the collected face information is matched with the face information corresponding to the user identification, determining that the verification is passed, and then carrying out payment deduction of corresponding payment amount based on the payment bill.
Optionally, determining a payment item list according to the user identifier includes:
determining each payment item associated with the user identifier;
and generating a payment item list according to each payment item.
In addition, the present application further provides a self-service payment device, including:
the information acquisition unit is configured to respond to the detection of the face information of the user, acquire the face information of the user and then verify the face information of the user;
the payment item list determining unit is configured to determine a user identifier corresponding to the face information of the user in response to the verification passing, determine a payment item list according to the user identifier, and voice-broadcast each payment item in the payment item list for the user to select;
the target payment item determining unit is configured to receive voice information of a user and analyze the voice information to determine a target payment item in the payment item list;
the payment bill searching unit is configured to search a corresponding payment bill according to the target payment item, and further voice-broadcast the name of the payment bill and the corresponding payment amount for confirmation of a user;
and the payment unit is configured to jump to a payment page to pay and deduct money based on the payment bill in response to determining that the name of the payment bill and the corresponding payment amount are confirmed by the user.
Optionally, the information acquisition unit is further configured to:
calling a face information base, and matching the face information of the user with each piece of face information in the face information base based on a face recognition algorithm;
and determining that the verification is passed in response to determining that the face information in the face information base is matched with the face information of the user.
Optionally, the target payment item determination unit is further configured to:
and inputting the voice information into a pre-trained voice recognition model to generate corresponding text information.
Optionally, the pre-trained speech recognition model is used to characterize a correspondence of speech information to text information.
Optionally, the target payment item determination unit is further configured to:
determining an accent identifier corresponding to the voice information;
calling a voice information base corresponding to the accent identifier, and further calling a voice recognition algorithm to determine the similarity between the voice information and each voice information in the voice information base;
and determining target voice information in the voice information base according to the similarity, and further determining text information corresponding to the target voice information in the voice information base to be determined as an analysis result.
Optionally, the target payment item determination unit is further configured to:
determining language identification corresponding to the voice information;
calling a voice information base corresponding to the language identification, and further calling a voice recognition algorithm to determine the similarity between the voice information and each voice information in the voice information base;
and determining target voice information in the voice information base according to the similarity, and further determining text information corresponding to the target voice information in the voice information base to be determined as an analysis result.
Optionally, the target payment item determination unit is further configured to:
and matching the text information with each payment item in the payment item list to determine the corresponding payment item, and further determining the corresponding payment item as a target payment item.
Optionally, the payment unit is further configured to:
prompting a user to acquire face information by voice, and further acquiring the face information;
and carrying out payment verification based on the collected face information, responding to the fact that the collected face information is matched with the face information corresponding to the user identification, determining that the verification is passed, and then carrying out payment deduction of corresponding payment amount based on the payment bill.
Optionally, the payment item list determining unit is further configured to:
determining each payment item associated with the user identifier;
and generating a payment item list according to each payment item.
In addition, the present application further provides a self-service payment electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the self-service payment method described above.
In addition, the application also provides a computer readable medium, on which a computer program is stored, and when the program is executed by a processor, the self-service payment method is realized.
One embodiment of the above invention has the following advantages or beneficial effects: in response to detecting the user's face information, the face information is collected and verified; in response to the verification passing, a user identifier corresponding to the face information is determined, a payment item list is determined according to the user identifier, and each payment item in the list is voice-broadcast for the user to select; the user's voice information is received and parsed to determine a target payment item in the payment item list; the corresponding payment bill is searched for according to the target payment item, and the name of the payment bill and the corresponding payment amount are voice-broadcast for the user to confirm; and in response to the user confirming the name of the payment bill and the corresponding payment amount, the method jumps to a payment page to perform payment deduction based on the payment bill. Thus, the entire self-service payment process requires no manual text input: the user completes self-service payment solely through facial recognition and voice interaction with the terminal device under voice prompts. Text-based human-computer interaction is replaced by voice interaction with the intelligent device, assisted by facial recognition, making self-service payment convenient, fast, simple, and secure, so that elderly users or users with limited literacy can also enjoy the convenience of online payment.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a further understanding of the application and are not to be construed as limiting the application. Wherein:
fig. 1 is a schematic diagram of a main flow of a self-service payment method according to a first embodiment of the present application;
fig. 2 is a schematic diagram of a main flow of a self-service payment method according to a second embodiment of the present application;
fig. 3 is a schematic view of an application scenario of a self-service payment method according to a third embodiment of the present application;
FIG. 4 is a schematic diagram of the main modules of a self-service payment device according to an embodiment of the application;
FIG. 5 is an exemplary system architecture diagram to which embodiments of the present application may be applied;
fig. 6 is a schematic structural diagram of a computer system suitable for implementing the terminal device or the server according to the embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic view of a main flow of a self-service payment method according to a first embodiment of the present application, and as shown in fig. 1, the self-service payment method includes:
step S101, responding to the detected user face information, collecting the user face information, and further verifying the user face information.
In this embodiment, an execution body of the self-service payment method (for example, a server) may detect in real time whether a user's face is present, through a data acquisition module based on the camera of a self-service or intelligent device. When a user's face is detected, the execution body may call the face information acquisition module within the data acquisition module to collect the user's face information, and may then call the face recognition algorithm module within the algorithm module to verify the collected face information. The face recognition algorithm module is responsible for analyzing the collected face information and comparing it with the user images stored in the system.
And step S102, responding to the verification passing, determining a user identification corresponding to the face information of the user, determining a payment project list according to the user identification, and broadcasting each payment project in the payment project list through voice for the user to select.
When the execution body verifies the user's face information based on the face recognition algorithm module and determines that the verification passes, that is, a user portrait stored in the system matches the collected face information, the execution body may determine the user identifier corresponding to the face information. The user identifier may be the user's mobile phone number, identity card number, user name, and the like; the present application does not limit its specific content. After determining the user identifier, the execution body may determine the payment items uniquely corresponding to it and generate a payment item list from them. The execution body may voice-broadcast the payment item list to prompt the user to select the payment item to be paid, that is, the target payment item. For example, the execution body may announce: "The payment items of user XX are: payment a, payment b, payment c, payment d." The user may select one payment item to pay, or select several payment items to pay in combination, is then asked to check the selection, and finally clicks a confirmation button to complete the selection of payment items.
In this embodiment, determining a payment item list according to the user identifier includes:
each payment item associated with the user identifier is determined, and specifically, one user identifier may correspond to one or more payment items. One payment item corresponds to one user identifier. The payment items may include payment names and payment amounts.
And generating a payment item list according to each payment item. After determining the payment items associated with the user identifier, the execution main body may arrange the payment items according to a preset arrangement order (for example, time from near to far), so as to generate a payment item list.
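The list-generation step above can be sketched minimally as follows; the `PaymentItem` fields and the date-based ordering are illustrative assumptions, since the description only requires a preset arrangement order (time from near to far):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical payment-item record; the description mentions only
# payment names and amounts, so the field set is illustrative.
@dataclass
class PaymentItem:
    name: str
    amount: float
    due_date: date

def build_payment_item_list(items):
    # Arrange items by the preset order described above:
    # time from near to far (most recent first).
    return sorted(items, key=lambda it: it.due_date, reverse=True)

items = [
    PaymentItem("water bill", 32.5, date(2021, 5, 1)),
    PaymentItem("electricity bill", 120.0, date(2021, 6, 15)),
]
ordered = build_payment_item_list(items)
```

With the sample data above, the June electricity bill is listed before the May water bill, matching the near-to-far ordering.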
Step S103, receiving voice information of the user, and analyzing the voice information to determine a target payment item in the payment item list.
Specifically, the execution body may collect the user's voice information based on the microphone of a self-service or intelligent device.
In some optional implementations of this embodiment, parsing the voice information includes:
determining an accent identifier corresponding to the voice information; calling a voice information base corresponding to the accent identifier, and further calling a voice recognition algorithm to determine the similarity between the voice information and each voice information in the voice information base; and determining target voice information in the voice information base according to the similarity, and further determining text information corresponding to the target voice information in the voice information base to be determined as an analysis result.
For example, when the user comes from another region and speaks with an accent different from the local one, the execution body may call the voice information collection module in the data collection module to collect the user's voice information. After receiving it, the execution body can parse the voice information. Specifically, the execution body may first determine the accent of the voice information, then select a voice information library with the same accent or with accent similarity greater than a threshold, and perform similarity matching between the received voice information and each entry in the library. An algorithm module may be called to convert the received voice information into a voice vector, and each entry in the voice information library is likewise converted into a voice vector. The execution body may then compute the similarity (specifically, cosine similarity) between the user's voice vector and each library vector, determine the library entry with the maximum similarity as the target voice information, locate the corresponding text information stored in the library, and take that text as the parse result of the user's voice information.
It is to be understood that the speech information library may be a key-value pair database of speech information and corresponding text information that the execution subject previously stored that is the same as or related to an accent of the speech information that the user may utter.
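The library lookup described above can be sketched as follows; the three-dimensional vectors and the `voice_library` contents are invented placeholders, since the description does not specify how voice information is vectorized:

```python
import math

# Hypothetical accent-specific voice information library: each stored
# voice vector (a stand-in for acoustic features) maps to its text.
voice_library = {
    (0.9, 0.1, 0.0): "water bill",
    (0.1, 0.8, 0.3): "electricity bill",
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def parse_voice(user_vector, library):
    # Pick the stored vector with maximum cosine similarity and
    # return its associated text as the parse result.
    best = max(library, key=lambda v: cosine_similarity(user_vector, v))
    return library[best]

result = parse_voice((0.88, 0.15, 0.02), voice_library)  # matches "water bill"
```

The same maximum-similarity lookup applies to the language-specific libraries described next; only the library chosen differs.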
In some optional implementations of this embodiment, parsing the voice information includes:
determining language identification corresponding to the voice information; calling a voice information base corresponding to the language identification, and further calling a voice recognition algorithm to determine the similarity between the voice information and each voice information in the voice information base; and determining target voice information in the voice information base according to the similarity, and further determining text information corresponding to the target voice information in the voice information base to be determined as an analysis result.
For example, when the user is a foreigner and utters voice information in a language different from the local one, such as English or Korean, the execution body may call the voice information collection module in the data collection module to collect the user's voice information. After receiving it, the execution body can parse the voice information. Specifically, the execution body may first determine the language of the voice information, then select a voice information library in the same language or with language similarity greater than a threshold, and perform similarity matching between the received voice information and each entry in the library. An algorithm module may be called to convert the received voice information into a voice vector, and each entry in the voice information library is likewise converted into a voice vector. The execution body may then compute the similarity (specifically, cosine similarity) between the user's voice vector and each library vector, determine the library entry with the maximum similarity as the target voice information, locate the corresponding text information stored in the library, and take that text as the parse result of the user's voice information.
It is understood that the speech information library may be a key-value pair database of speech information and corresponding text information that the execution subject previously stored in the same or related language as the speech information that the user may utter.
In some optional implementation manners of this embodiment, determining a target payment item in the payment item list includes:
the text information is matched with each payment item in the payment item list, specifically, cosine similarity between the text information and each payment item in the payment item list can be calculated to determine the corresponding payment item, and specifically, the payment item in the payment item list corresponding to the maximum similarity can be determined as the target payment item.
Step S104, searching for the corresponding payment bill according to the target payment item, and then voice-broadcasting the name of the payment bill and the corresponding payment amount for the user to confirm. Each payment item corresponds to a unique payment bill, so the corresponding bill can be retrieved from the database according to the target payment item. After the execution body finds the payment bill, the date, name, and payment amount of the bill can be voice-broadcast for the user to confirm.
Step S105, responding to the name of the payment bill confirmed by the user and the corresponding payment amount, jumping to a payment page, and carrying out payment deduction based on the payment bill.
When jumping to the payment page for payment deduction, the execution body may first verify whether the paying user is the user corresponding to the payment bill. Specifically, the execution body may collect the paying user's face information again, match it against the face information corresponding to the payment bill, and execute the payment deduction if the match succeeds.
In this embodiment, performing payment deduction based on the payment bill includes:
prompting the user by voice that face information will be collected, and then collecting the face information;
and performing payment verification based on the collected face information; in response to determining that the collected face information matches the face information of the user identifier corresponding to the payment bill, the execution body determines that the verification passes, and then performs payment deduction of the corresponding payment amount based on the payment bill.
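The verify-then-deduct flow can be sketched as follows; reducing face matching to comparing identifiers, and the dictionary-shaped payment bill, are simplifying assumptions (a real system would use a face-recognition model and a payment gateway):

```python
def deduct(payment_bill, collected_face_id, registered_face_id):
    # Verify the payer first; deduct only when verification passes.
    if collected_face_id != registered_face_id:
        return None  # verification failed, no deduction
    return payment_bill["amount"]  # amount actually deducted

bill = {"name": "water bill", "amount": 32.5}
paid = deduct(bill, "face-001", "face-001")    # 32.5
failed = deduct(bill, "face-002", "face-001")  # None
```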
In this embodiment, in response to detecting the user's face information, the face information is collected and verified; in response to the verification passing, a user identifier corresponding to the face information is determined, a payment item list is determined according to the user identifier, and each payment item in the list is voice-broadcast for the user to select; the user's voice information is received and parsed to determine a target payment item in the payment item list; the corresponding payment bill is searched for according to the target payment item, and the name of the payment bill and the corresponding payment amount are voice-broadcast for the user to confirm; and in response to the user confirming the name of the payment bill and the corresponding payment amount, the method jumps to a payment page to perform payment deduction based on the payment bill. Thus, the entire self-service payment process requires no manual text input: the user completes self-service payment solely through facial recognition and voice interaction with the terminal device under voice prompts. Text-based human-computer interaction is replaced by voice interaction with the intelligent device, assisted by facial recognition, making self-service payment convenient, fast, simple, and secure, so that elderly users or users with limited literacy can also enjoy the convenience of online payment.
Fig. 2 is a schematic main flow chart of a self-service payment method according to a second embodiment of the present application, and as shown in fig. 2, the self-service payment method includes:
step S201, in response to detecting the user face information, acquiring the user face information, and further verifying the user face information.
The principle of step S201 is similar to that of step S101, and is not described here.
Specifically, step S201 can also be implemented by steps S2011 to S2012:
step S2011, a face information base is called, and then based on a face recognition algorithm, the face information of the user is matched with each piece of face information in the face information base.
Step S2012, in response to determining that the face information in the face information base matches the user face information, determining that the verification is passed.
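Steps S2011 to S2012 amount to a nearest-neighbor match of the collected face against the face information base. The sketch below illustrates one possible (hypothetical) realization using cosine similarity over face feature vectors; the embedding values, threshold, and database layout are invented for illustration and are not part of the claimed method:

```python
import math

# Hypothetical face information base: user_id -> face feature vector.
# In practice these vectors would come from a trained face recognition model.
FACE_INFO_BASE = {
    "user_001": [0.12, 0.80, 0.33],
    "user_002": [0.91, 0.05, 0.44],
}

MATCH_THRESHOLD = 0.95  # illustrative similarity threshold, not from the patent

def cosine_similarity(a, b):
    """Cosine similarity of two feature vectors (0.0 when either is all-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def verify_face(user_vector):
    """Match the collected face against every entry in the base (step S2011);
    if the best match clears the threshold, the verification passes (S2012)."""
    best_id, best_sim = None, 0.0
    for user_id, stored in FACE_INFO_BASE.items():
        sim = cosine_similarity(user_vector, stored)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    if best_sim >= MATCH_THRESHOLD:
        return True, best_id
    return False, None
```

A production system would obtain the feature vectors from a face recognition model and tune the threshold against false-accept and false-reject rates.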
In this embodiment, the collected user face information is verified so that subsequent operations proceed only after the verification passes, which improves the security of self-service payment based on AI face and voice technology.
Step S202, in response to the verification passing, determining a user identifier corresponding to the user face information, determining a payment item list according to the user identifier, and broadcasting each payment item in the payment item list by voice for the user to select.
The principle of step S202 is similar to that of step S102 and is not repeated here.
Step S203, receiving the voice information of the user, and analyzing the voice information to determine the target payment item in the payment item list.
The principle of step S203 is similar to that of step S103 and is not repeated here.
Specifically, step S203 may also be implemented by step S2031:
step S2031, inputting the voice information into a pre-trained voice recognition model to generate corresponding text information.
The pre-trained speech recognition model is used for representing the corresponding relation between the speech information and the text information.
Step S204, searching the corresponding payment bill according to the target payment item, and then broadcasting the name of the payment bill and the corresponding payment amount by voice for the user to confirm.
Step S205, in response to determining that the user confirms the name of the payment bill and the corresponding payment amount, jumping to the payment page to perform payment deduction based on the payment bill.
The principle of step S204 to step S205 is similar to that of step S104 to step S105, and is not described here again.
This embodiment of the application, combined with the payment service flow, introduces Artificial Intelligence (AI) face and voice technology through smart devices. Links that require verifying the user's identity and password, such as login and payment, are completed by face recognition: login and payment proceed only after the user's identity is matched, which improves the security of self-service payment. Voice technology changes the human-computer interaction mode, replacing text interaction between the user and the terminal device with voice interaction between the user and the smart device.
AI: artificial Intelligence (Artificial Intelligence), abbreviated in english as AI. The method is a new technical science for researching and developing theories, methods, technologies and application systems for simulating, extending and expanding human intelligence. Face recognition: is a biological identification technology for identifying the identity based on the face characteristic information of a person. A series of related technologies, also commonly called face recognition and face recognition, are used to collect images or video streams containing faces by using a camera or a video camera, automatically detect and track the faces in the images, and then perform face recognition on the detected faces. The voice technology comprises the following steps: the speech technology refers to key technologies in the computer field, such as automatic speech recognition technology (ASR) and speech synthesis technology (TTS). Speech Recognition technology, also known as Automatic Speech Recognition (ASR), aims at converting the vocabulary content in human Speech into computer-readable input, such as keystrokes, binary codes or character sequences. Unlike speaker recognition and speaker verification, the latter attempts to recognize or verify the speaker who uttered the speech rather than the vocabulary content contained therein. Speech synthesis is a technique for generating artificial speech by mechanical, electronic methods. TTS technology (also known as text-to-speech technology) belongs to speech synthesis, and is a technology for converting text information generated by a computer or input from the outside into intelligible and fluent chinese spoken language and outputting the same.
This application takes the self-service payment scenario as the contact point and introduces face recognition technology: the user's identity is recognized by comparing the collected face with the user portrait stored in the system. In the login link, this replaces the traditional user-name-and-password login; in the payment link, it replaces the traditional account-number-and-password payment. Voice technology is introduced to realize human-machine dialogue and replace the traditional page-operation mode: the user logs into the system through face recognition, exchanges information through human-machine dialogue, and pays through the account associated with the recognized face. Based on AI face and voice technology, and by means of self-service devices or mobile terminals, the existing payment flow is intelligently transformed. The brand-new human-computer interaction experience requires no extra technical skill, and the safer payment method lets more users enjoy the convenience of online payment services, making the whole self-service payment flow more intelligent and convenient.
Fig. 3 is a schematic view of an application scenario of a self-service payment method according to a third embodiment of the present application. The self-service payment method can be applied to self-service payment scenarios. As shown in fig. 3, the self-service payment terminal 302 collects user face information 303 in response to detecting the user face 301, and then verifies the user face information 303. In response to the verification passing, the self-service payment terminal 302 determines the user identifier 304 corresponding to the user face information 303, determines the payment item list 305 according to the user identifier 304, and broadcasts the payment items (payment item 1, payment item 2, …, payment item n) in the payment item list 305 by voice for the user 306 to select. The self-service payment terminal 302 receives the voice information 307 of the user 306 and parses it to determine the target payment item 308 in the payment item list 305. The self-service payment terminal 302 looks up the corresponding payment bill 309 according to the target payment item 308, and broadcasts the name of the payment bill 309 and the corresponding payment amount by voice for the user 306 to confirm. In response to determining that the user 306 confirms the name of the payment bill 309 and the corresponding payment amount, the self-service payment terminal 302 jumps to the payment page, collects the information of the user face 310 again, and performs payment deduction based on the payment bill 309. The execution subject in the present application may be a processor in the self-service payment terminal 302 or a server communicatively connected to the self-service payment terminal. The self-service payment terminal 302 completes the processing of self-service payment information by exchanging information with the processor or the server.
By way of example, the overall process of the self-service payment method of the application is as follows:
(1) the camera equipment collects face information of a user;
(2) the user interface calls a data receiving module to transmit data to the background;
(3) the data receiving module receives the acquired data and analyzes the data;
(4) the face recognition algorithm module analyzes the face information and compares the face information with a user portrait stored in the system, and login operation is performed if the user information can be recognized;
(5) the device collects voice information of the user;
(6) the data receiving module receives the acquired data and analyzes the data;
(7) the voice recognition algorithm module analyzes the voice information and analyzes the meaning of the voice information;
(8) a response is made according to the parsed voice information, and steps 5, 6 and 7 are repeated (remark: according to the payment process, the currently recognizable voice flows include parsing the payment item, querying the payment bill, and paying);
(9) the payment link is entered, and the device collects the user face information;
(10) the user interface calls a data receiving module to transmit data to the background;
(11) the data receiving module receives the acquired data and analyzes the data;
(12) the face recognition algorithm module analyzes the face information, compares the face information with a user portrait stored in the system, and transmits related payment information to the back end for payment;
(13) the payment result is fed back to the user.
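The thirteen steps above can be condensed into a single orchestration skeleton. The following is an illustrative sketch only, not the patented implementation: the toy equality-based face match, substring-based voice parsing, and dictionary "backend" are all stand-ins for the camera, recognition-algorithm, and payment modules described in the flow:

```python
class SelfServicePaymentFlow:
    """Illustrative orchestration of the collect -> recognize -> respond steps."""

    def __init__(self, face_db, bills):
        self.face_db = face_db  # user_id -> face feature (toy stand-in)
        self.bills = bills      # user_id -> {payment item: amount}

    def recognize_face(self, face):
        """Steps (1)-(4) and (9)-(12): compare the face with stored portraits."""
        for user_id, stored in self.face_db.items():
            if stored == face:  # toy equality check instead of a real matcher
                return user_id
        return None

    def parse_voice(self, voice_text, user_id):
        """Steps (5)-(8): a toy parser that spots a payment item in the utterance."""
        for item in self.bills.get(user_id, {}):
            if item in voice_text:
                return item
        return None

    def run(self, face, voice_text):
        """End-to-end flow, steps (1)-(13)."""
        user_id = self.recognize_face(face)        # login via face recognition
        if user_id is None:
            return "login failed"
        item = self.parse_voice(voice_text, user_id)
        if item is None:
            return "item not recognized"
        amount = self.bills[user_id][item]
        if self.recognize_face(face) != user_id:   # step (12): second face check
            return "payment verification failed"
        return f"paid {amount} for {item}"         # step (13): feedback to user
```

Each stub would be replaced by the corresponding data acquisition, data receiving, and algorithm modules described below.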
This application takes the scenario as the contact point. Based on AI face and voice technology, and by means of a self-service device or mobile terminal, the user portrait of the payment platform is matched with the collected face information to recognize the user's identity; the user's orders to be paid are automatically displayed, newly added payment services are automatically found, and voice prompts are issued to the user; the selection of a payment item is completed through dialogue with the user, the payment order information is confirmed, and payment is then completed automatically through the account bound to the user. This realizes a brand-new human-computer interaction mode, brings a novel experience to the user, and changes user habits. Face recognition, as an identity verification means, is more secure and faster. The introduction of voice technology is friendlier to elderly users and users with low literacy.
Fig. 4 is a schematic diagram of main modules of a self-service payment device according to an embodiment of the application. As shown in fig. 4, the self-service payment device includes an information acquisition unit 401, a payment item list determination unit 402, a target payment item determination unit 403, a payment order search unit 404, and a payment unit 405.
An information acquisition unit 401 configured to, in response to detecting user face information, collect the user face information and then verify it. Specifically, the information acquisition unit 401 may include a data acquisition module, which may in turn include a face information acquisition module and a voice information acquisition module. The data acquisition module may collect the user's face data based on the camera of the self-service or smart device, and collect the user's voice information based on the microphone of the self-service or smart device.
And a payment item list determining unit 402 configured to determine, in response to the verification passing, a user identifier corresponding to the user face information, to determine a payment item list according to the user identifier, and to voice-broadcast each payment item in the payment item list for selection by the user.
And a target payment item determination unit 403 configured to receive voice information of the user, and parse the voice information to determine a target payment item in the payment item list.
The payment bill searching unit 404 is configured to search for the corresponding payment bill according to the target payment item, and to broadcast the name of the payment bill and the corresponding payment amount by voice for the user to confirm.
A payment unit 405 configured to jump to a payment page to make a payment deduction based on the payment order in response to determining that the user confirms the name of the payment order and the corresponding payment amount.
In some embodiments, the information acquisition unit 401 is further configured to: calling a face information base, and matching the face information of the user with each piece of face information in the face information base based on a face recognition algorithm; and determining that the verification is passed in response to determining that the face information in the face information base is matched with the face information of the user.
In some embodiments, the target payment item determination unit 403 is further configured to: and inputting the voice information into a pre-trained voice recognition model to generate corresponding text information.
In some embodiments, a pre-trained speech recognition model is used to characterize the correspondence of speech information to text information.
In some embodiments, the target payment item determination unit 403 is further configured to: determining an accent identifier corresponding to the voice information; calling a voice information base corresponding to the accent identifier, and then calling a voice recognition algorithm to determine the similarity between the voice information and each piece of voice information in the voice information base; and determining target voice information in the voice information base according to the similarity, and then determining the text information corresponding to the target voice information in the voice information base as the parsing result.
In some embodiments, the target payment item determination unit 403 is further configured to: determining a language identifier corresponding to the voice information; calling a voice information base corresponding to the language identifier, and then calling a voice recognition algorithm to determine the similarity between the voice information and each piece of voice information in the voice information base; and determining target voice information in the voice information base according to the similarity, and then determining the text information corresponding to the target voice information in the voice information base as the parsing result.
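To illustrate the accent- and language-specific matching in these embodiments: pick the voice information base for the given identifier, score the incoming voice information against each stored entry, and return the text of the most similar entry as the parsing result. In this sketch, string similarity via `difflib` replaces a real acoustic similarity measure, and the base contents and identifiers are invented for illustration:

```python
import difflib

# Hypothetical per-accent voice information bases: stored utterance -> text.
VOICE_BASES = {
    "mandarin": {
        "jiao shui fei": "pay water bill",
        "jiao dian fei": "pay electricity bill",
    },
    "cantonese": {
        "gaau seoi fai": "pay water bill",
    },
}

def parse_with_base(voice_info, accent_id):
    """Score the voice information against every entry in the base for this
    accent identifier and return the text of the most similar entry."""
    base = VOICE_BASES[accent_id]
    best_text, best_score = None, 0.0
    for stored_utterance, text in base.items():
        score = difflib.SequenceMatcher(None, voice_info, stored_utterance).ratio()
        if score > best_score:
            best_text, best_score = text, score
    return best_text
```

The same shape serves the language-identifier variant: key the bases by language identifier instead of accent identifier.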
In some embodiments, the target payment item determination unit 403 is further configured to: and matching the text information with each payment item in the payment item list to determine the corresponding payment item, and further determining the corresponding payment item as a target payment item.
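The matching of recognized text against the payment item list might be realized as a fuzzy lookup, for example as follows (the cutoff value and item names are illustrative assumptions, not taken from the patent):

```python
import difflib

def match_payment_item(text_info, payment_items):
    """Return the payment item closest to the recognized text, or None when
    nothing in the list is similar enough to count as a match."""
    matches = difflib.get_close_matches(text_info, payment_items, n=1, cutoff=0.6)
    return matches[0] if matches else None

# Illustrative payment item list for one user.
items = ["water bill", "electricity bill", "gas bill"]
```

Exact matching would also work when the speech recognizer emits canonical item names; the fuzzy variant tolerates small transcription errors.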
In some embodiments, the payment unit 405 is further configured to: prompting the user by voice that face information will be collected, and then collecting the face information; performing payment verification based on the collected face information; and, in response to determining that the collected face information matches the face information corresponding to the user identifier, determining that the verification passes, and then performing payment deduction of the corresponding payment amount based on the payment bill.
In some embodiments, the payment item list determination unit 402 is further configured to: determining each payment item associated with the user identifier; and generating a payment item list according to each payment item.
In some embodiments, the self-service payment device further comprises a user interface module configured as the interface through which the user interacts with the system, and which transmits the collected data to the system background.
In some embodiments, the self-service payment device further comprises a data receiving module configured to convert the collected face and voice information into data information recognizable by the system, and to call the corresponding algorithm according to the type of information.
In some embodiments, the self-service payment device further includes an algorithm module comprising a face recognition algorithm module and a voice recognition algorithm module. The face recognition algorithm module analyzes the collected face and compares it with the user portrait stored in the system; the voice recognition algorithm module parses the collected voice information and analyzes its meaning.
In some embodiments, the self-service payment device further comprises a request response module configured to respond according to the calculation result of the algorithm module.
It should be noted that, in the present application, the self-service payment method and the self-service payment device have a corresponding relationship in specific implementation contents, and therefore repeated contents are not described again.
Fig. 5 shows an exemplary system architecture 500 to which the self-service payment method or the self-service payment apparatus according to the embodiment of the present application may be applied.
As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 serves to provide a medium for communication links between the terminal devices 501, 502, 503 and the server 505. Network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 501, 502, 503 to interact with a server 505 over a network 504 to receive or send messages or the like. The terminal devices 501, 502, 503 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 501, 502, 503 may be various electronic devices having a display screen that supports self-service payment processing and web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 505 may be a server providing various services, such as a background management server (by way of example only) providing support for user face information submitted by users through the terminal devices 501, 502, 503. The background management server can, in response to detecting user face information, collect and verify the user face information; in response to the verification passing, determine a user identifier corresponding to the user face information, determine a payment item list according to the user identifier, and broadcast each payment item in the list by voice for the user to select; receive the user's voice information and parse it to determine a target payment item in the payment item list; look up the corresponding payment bill according to the target payment item, and broadcast the bill name and the corresponding payment amount by voice for the user to confirm; and, in response to the user confirming the bill name and amount, jump to a payment page to perform payment deduction based on the bill. Throughout the self-service payment process the user needs no manual text input: text interaction with the terminal device is replaced by voice interaction with the smart device, assisted by face recognition, making self-service payment convenient, fast, simple and safe, so that elderly users or users with low literacy can also enjoy the convenience of online payment.
It should be noted that the self-service payment method provided in the embodiment of the present application is generally executed by the server 505, and accordingly, the self-service payment device is generally disposed in the server 505.
It should be understood that the number of terminal devices, networks, and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing a terminal device of an embodiment of the present application. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the computer system 600. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments disclosed herein, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments disclosed herein include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The above-described functions defined in the system of the present application are executed when the computer program is executed by the Central Processing Unit (CPU) 601.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor comprises an information acquisition unit, a payment item list determination unit, a target payment item determination unit, a payment bill searching unit and a payment unit. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to collect user face information in response to detecting the user face information, and further verify the user face information; in response to the verification passing, determining a user identifier corresponding to the face information of the user, determining a payment item list according to the user identifier, and broadcasting each payment item in the payment item list in a voice mode for the user to select; receiving voice information of a user, and analyzing the voice information to determine a target payment item in a payment item list; searching a corresponding payment bill according to the target payment project, and further broadcasting the name of the payment bill and the corresponding payment amount in a voice mode for a user to confirm; and responding to the name of the confirmed payment bill of the user and the corresponding payment amount, and skipping to a payment page to pay and deduct money based on the payment bill.
According to the technical scheme of the embodiments of the application, the user can complete self-service payment through face recognition and voice prompts, interacting with the terminal device only by voice and without manual input during the whole process. Text interaction between the user and the terminal device is replaced by voice interaction with the smart device, assisted by face recognition, so that no text needs to be typed at any point of the self-service payment. This makes self-service payment convenient, fast, simple and safe, and allows elderly users or users with low literacy to enjoy the convenience of online payment.
The above-described embodiments should not be construed as limiting the scope of the present application. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A self-service payment method is characterized by comprising the following steps:
in response to the detection of the face information of the user, acquiring the face information of the user, and further verifying the face information of the user;
in response to the verification passing, determining a user identifier corresponding to the user face information, determining a payment item list according to the user identifier, and broadcasting each payment item in the payment item list in a voice mode for a user to select;
receiving voice information of a user, and analyzing the voice information to determine a target payment item in the payment item list;
searching a corresponding payment bill according to the target payment project, and further broadcasting the name of the payment bill and the corresponding payment amount in a voice mode for a user to confirm;
and responding to the name of the payment bill and the corresponding payment amount confirmed by the user, and skipping to a payment page to pay and deduct money based on the payment bill.
2. The method of claim 1, wherein the verifying the user face information comprises:
calling a face information base, and matching the face information of the user with each piece of face information in the face information base based on a face recognition algorithm;
and determining that the verification is passed in response to determining that the face information exists in the face information base and is matched with the user face information.
3. The method of claim 1, wherein parsing the voice information comprises:
and inputting the voice information into a pre-trained voice recognition model to generate corresponding text information.
4. The method of claim 3, wherein the pre-trained speech recognition model is used to characterize a correspondence of speech information to text information.
5. The method of claim 1, wherein parsing the voice information comprises:
determining an accent identifier corresponding to the voice information;
calling a voice information base corresponding to the accent identifier, and further calling a voice recognition algorithm to determine the similarity between the voice information and each voice information in the voice information base;
and determining target voice information in the voice information base according to the similarity, and further determining text information corresponding to the target voice information in the voice information base to be determined as an analysis result.
6. The method of claim 1, wherein parsing the voice information comprises:
determining a language identifier corresponding to the voice information;
calling a voice information base corresponding to the language identifier, and invoking a voice recognition algorithm to determine the similarity between the voice information and each piece of voice information in the base;
and determining target voice information in the voice information base according to the similarity, and taking the text information corresponding to the target voice information in the base as the parsing result.
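Claims 5 and 6 share the same lookup shape: select a voice information base by identifier, then return the text of the most similar stored sample. A minimal sketch, with hypothetical per-identifier banks and cosine similarity standing in for the voice recognition algorithm:

```python
import math

def cosine(a, b):
    # Cosine similarity between two voice feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Hypothetical per-accent (or per-language) voice information bases:
# each entry pairs a stored voice feature vector with its known text.
VOICE_BANKS = {
    "mandarin":  [([0.9, 0.2], "pay water bill"), ([0.2, 0.9], "pay tuition")],
    "cantonese": [([0.7, 0.5], "pay water bill"), ([0.1, 0.8], "pay tuition")],
}

def parse_with_bank(identifier, voice_features):
    """Pick the base for the accent/language identifier, then return
    the text of the most similar stored voice sample."""
    bank = VOICE_BANKS[identifier]
    best = max(bank, key=lambda entry: cosine(entry[0], voice_features))
    return best[1]

print(parse_with_bank("mandarin", [0.85, 0.25]))  # closest mandarin sample
```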
7. The method of any one of claims 3, 5, and 6, wherein determining the target payment item in the payment item list comprises:
matching the text information against each payment item in the payment item list to determine the corresponding payment item, which is taken as the target payment item.
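The matching step of claim 7 can be sketched with a generic string-similarity measure; `difflib.SequenceMatcher` below is an illustrative choice, not the algorithm the application specifies:

```python
from difflib import SequenceMatcher

def target_payment_item(text, payment_items):
    """Match the recognised text against each item in the payment item
    list and return the closest one as the target payment item."""
    def score(item):
        return SequenceMatcher(None, text, item).ratio()
    return max(payment_items, key=score)

items = ["water bill", "electricity bill", "property fee"]
print(target_payment_item("i want to pay the water bill", items))
```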
8. The method of claim 1, wherein making a payment deduction based on the payment bill comprises:
prompting the user by voice that face information will be collected, and then collecting the face information;
and performing payment verification based on the collected face information: in response to determining that the collected face information matches the face information corresponding to the user identifier, determining that the verification passes, and then deducting the corresponding payment amount based on the payment bill.
9. The method of claim 1, wherein determining a payment item list according to the user identifier comprises:
determining each payment item associated with the user identifier;
and generating the payment item list from the payment items.
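Claim 9 reduces to collecting the items associated with an identifier and assembling them into the list that is voice-broadcast. A minimal sketch with a hypothetical association table:

```python
# Hypothetical association table: user identifier -> payment items
# registered to that user (e.g. utilities the user has signed up for).
USER_ITEMS = {
    "user_001": ["water bill", "electricity bill"],
    "user_002": ["property fee"],
}

def payment_item_list(user_id):
    """Collect every payment item associated with the identifier and
    number them for the voice broadcast."""
    items = USER_ITEMS.get(user_id, [])
    return [f"{i + 1}. {item}" for i, item in enumerate(items)]

print(payment_item_list("user_001"))
```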
10. A self-service payment device, comprising:
an information acquisition unit configured to, in response to detecting user face information, acquire the user face information and then verify it;
a payment item list determining unit configured to, in response to the verification passing, determine a user identifier corresponding to the user face information, determine a payment item list according to the user identifier, and voice-broadcast each payment item in the list for the user to select;
a target payment item determining unit configured to receive voice information from the user and parse it to determine a target payment item in the payment item list;
a payment bill searching unit configured to search for a corresponding payment bill according to the target payment item and voice-broadcast the name of the payment bill and the corresponding payment amount for the user to confirm;
and a payment unit configured to, in response to the user confirming the name of the payment bill and the corresponding payment amount, jump to a payment page to perform a payment deduction based on the payment bill.
11. A self-service payment electronic device, comprising:
one or more processors; and
a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-9.
CN202110739160.8A 2021-06-30 2021-06-30 Self-service payment method and device Pending CN113470278A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110739160.8A CN113470278A (en) 2021-06-30 2021-06-30 Self-service payment method and device

Publications (1)

Publication Number Publication Date
CN113470278A true CN113470278A (en) 2021-10-01

Family

ID=77876688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110739160.8A Pending CN113470278A (en) 2021-06-30 2021-06-30 Self-service payment method and device

Country Status (1)

Country Link
CN (1) CN113470278A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130110511A1 (en) * 2011-10-31 2013-05-02 Telcordia Technologies, Inc. System, Method and Program for Customized Voice Communication
CN104391673A (en) * 2014-11-20 2015-03-04 百度在线网络技术(北京)有限公司 Voice interaction method and voice interaction device
CN109545344A (en) * 2018-10-31 2019-03-29 平安医疗健康管理股份有限公司 Medical fee payment method, medical payment terminal and medical institution server
CN109686362A (en) * 2019-01-02 2019-04-26 百度在线网络技术(北京)有限公司 Voice broadcast method, device and computer readable storage medium
CN110444200A (en) * 2018-05-04 2019-11-12 北京京东尚科信息技术有限公司 Information processing method, electronic equipment, server, computer system and medium
CN111626726A (en) * 2019-02-28 2020-09-04 百度在线网络技术(北京)有限公司 Life payment method, device, equipment and storage medium
CN111986651A (en) * 2020-09-02 2020-11-24 上海优扬新媒信息技术有限公司 Man-machine interaction method and device and intelligent interaction terminal
CN112349276A (en) * 2020-12-03 2021-02-09 恒大新能源汽车投资控股集团有限公司 Vehicle-mounted voice interaction method and device and electronic equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706137A (en) * 2021-10-21 2021-11-26 国网汇通金财(北京)信息科技有限公司 Data processing method and system applied to payment information
CN113706137B (en) * 2021-10-21 2022-04-01 国网汇通金财(北京)信息科技有限公司 Data processing method and system applied to payment information
CN115881096A (en) * 2023-02-22 2023-03-31 翌飞锐特电子商务(北京)有限公司 Intelligent account checking method and system
CN117253318A (en) * 2023-08-22 2023-12-19 北京国旺盛源智能终端科技有限公司 Intelligent self-service payment terminal system and method
CN117253318B (en) * 2023-08-22 2024-03-29 北京国旺盛源智能终端科技有限公司 Intelligent self-service payment terminal system and method

Similar Documents

Publication Publication Date Title
US10824874B2 (en) Method and apparatus for processing video
US10832686B2 (en) Method and apparatus for pushing information
US10978047B2 (en) Method and apparatus for recognizing speech
KR102023842B1 (en) Multimedia content playback method and apparatus
CN107833574B (en) Method and apparatus for providing voice service
CN113470278A (en) Self-service payment method and device
CN106373575B (en) User voiceprint model construction method, device and system
EP3451328B1 (en) Method and apparatus for verifying information
US11127399B2 (en) Method and apparatus for pushing information
CN105931644A (en) Voice recognition method and mobile terminal
CN103679452A (en) Payment authentication method, device thereof and system thereof
KR101850026B1 (en) Personalized advertisment device based on speech recognition sms service, and personalized advertisment exposure method based on speech recognition sms service
CN107943914A (en) Voice information processing method and device
CN109462482B (en) Voiceprint recognition method, voiceprint recognition device, electronic equipment and computer readable storage medium
CN109582825B (en) Method and apparatus for generating information
CN111583931A (en) Service data processing method and device
CN112669842A (en) Man-machine conversation control method, device, computer equipment and storage medium
CN108877779A (en) Method and apparatus for detecting voice tail point
CN111142834A (en) Service processing method and system
EP3843090B1 (en) Method and apparatus for outputting analysis abnormality information in spoken language understanding
CN112837672A (en) Method and device for determining conversation affiliation, electronic equipment and storage medium
CN114067842B (en) Customer satisfaction degree identification method and device, storage medium and electronic equipment
CN115602160A (en) Service handling method and device based on voice recognition and electronic equipment
CN106371905B (en) Application program operation method and device and server
CN110765242A (en) Method, device and system for providing customer service information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20211001