CN115437500A - Service handling method and device based on gesture interaction and self-service equipment - Google Patents

Service handling method and device based on gesture interaction and self-service equipment

Info

Publication number
CN115437500A
CN115437500A (application number CN202211070526.8A)
Authority
CN
China
Prior art keywords
service
gesture
client
handling
inquiry
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211070526.8A
Other languages
Chinese (zh)
Inventor
宋雨
李敬文
杨晓明
程璐
黄康
陈欢
赵辉
柏莹
王舒倩
程轼博
简苡霖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202211070526.8A priority Critical patent/CN115437500A/en
Publication of CN115437500A publication Critical patent/CN115437500A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02Banking, e.g. interest calculation or account maintenance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Development Economics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a business handling method and apparatus based on dynamic gesture interaction, a self-service device, and a storage medium, which can be applied to fields such as artificial intelligence, cloud computing, and big data, as well as to the financial field.

Description

Service handling method and device based on gesture interaction and self-service equipment
Technical Field
The application relates to the field of artificial intelligence, and in particular to a service handling method and apparatus based on gesture interaction and a self-service device.
Background
With the development of voice recognition technology, it has been widely applied to electronic devices in various fields, meeting application requirements such as voice interaction and voice input and improving the convenience of such devices. For example, in a banking business handling scenario, in order to improve handling efficiency and save labor cost, self-service devices can be deployed at branch outlets to help clients quickly complete business consultation and handling through voice interaction, thereby improving the client experience.
However, existing self-service devices that support voice interaction are not suitable for language-impaired clients, who still need business staff to communicate with them in person and assist them in handling business. Because most business staff do not know sign language, it is difficult for them to accurately understand a language-impaired client's requirements, which reduces the efficiency of communication between the client and the staff and, in turn, the efficiency of business handling.
Disclosure of Invention
In order to solve the above problem, the embodiments of the present application provide the following technical solutions:
in one aspect, the present application provides a service handling method based on gesture interaction, where the method includes:
responding to a service mode switching instruction aiming at a client requesting to handle the service, triggering the self-service equipment to enter a gesture interaction service mode, and outputting a service handling inquiry gesture;
acquiring a service handling reply gesture input by the client based on the service handling inquiry gesture;
and inquiring a pre-constructed service knowledge base based on the service transacting reply gesture, and outputting a service transacting guide gesture aiming at the service requested by the client to transact the service so as to guide the client to transact the service in a gesture interaction mode.
Optionally, the responding to a service mode switching instruction for a client requesting to handle a service includes:
acquiring identity recognition information of the client requesting to handle a service, determining that the identity recognition information meets a gesture interaction condition, and responding to a service mode switching instruction for a gesture interaction service mode of the self-service device; or,
detecting a wake-up action preconfigured for the gesture interaction service mode of the self-service device, and responding to a service mode switching instruction for the gesture interaction service mode.
Optionally, the obtaining identity identification information of the client requesting service handling and determining that the identity identification information meets the gesture interaction condition include:
acquiring a face image of a client requesting to transact a service;
and performing face recognition on the face image, and determining, according to client marking information stored in a business system, that the client is marked as language-impaired or as handling business in a gesture interaction mode.
Optionally, the obtaining identity identification information of the client requesting service handling and determining that the identity identification information meets the gesture interaction condition include:
acquiring video data of a business handling area of self-service equipment;
analyzing the video data, determining that a client enters the service handling area, and outputting a specific voice sentence;
acquiring reaction video data of the client aiming at the specific voice statement;
and performing audio and body action recognition on the reaction video data, and determining that the gesture interaction condition is met.
Optionally, the outputting a service handling inquiry gesture includes:
acquiring historical interaction information of the client on a service system;
analyzing the historical interaction information to obtain the predicted service requested to be transacted by the client and the predicted operation of the predicted service;
constructing service inquiry information aiming at the predicted service and operation inquiry information aiming at the predicted operation;
converting the service inquiry information into a service inquiry gesture and outputting the service inquiry gesture;
and converting the operation inquiry information into an operation inquiry gesture and outputting the operation inquiry gesture.
Optionally, the obtaining a service transaction reply gesture input by the client based on the service transaction inquiry gesture includes:
acquiring a business reply gesture input by the client based on the business query gesture and an operation reply gesture input by the client based on the operation query gesture;
the querying a service knowledge base constructed in advance based on the service transacting reply gesture, and outputting a service transacting guidance gesture for transacting a service requested by the client, includes:
respectively performing semantic recognition on the service reply gesture and the operation reply gesture, and merging recognized semantic information to obtain transacted service reply information;
inquiring a pre-constructed service knowledge base to obtain service handling guide information matched with the service handling reply information;
and converting the business handling guide information into a business handling guide gesture and outputting the business handling guide gesture.
Optionally, the outputting a service handling inquiry gesture includes:
acquiring a specific voice sentence preconfigured by a service system, converting the specific voice sentence into a service handling inquiry gesture, and outputting the service handling inquiry gesture; or,
calling a service handling inquiry gesture preconfigured for the gesture interaction service mode, and outputting the service handling inquiry gesture.
Optionally, the method for constructing the service knowledge base includes:
receiving a specific service gesture sent by a service client; the specific business gesture is input by a client for a specific business;
acquiring service gestures corresponding to various services supported by a service system based on a gesture database or a gesture conversion model;
associating the received specific service gesture and the acquired service gesture corresponding to each service with a service identifier of the corresponding service to construct a service knowledge base;
the specific service gesture in the service knowledge base is associated with an identity of the client who inputs the specific service gesture; and at least one service gesture is associated with the same service identifier.
In another aspect, the present application further provides a service handling apparatus based on gesture interaction, where the apparatus includes:
the service mode switching module is used for responding to a service mode switching instruction aiming at a client requesting to handle the service and triggering the self-service equipment to enter a gesture interaction service mode;
the business handling inquiry gesture output module is used for outputting business handling inquiry gestures;
a service handling reply gesture acquisition module, configured to acquire a service handling reply gesture input by the client based on the service handling inquiry gesture;
and the service handling guiding gesture output module is used for inquiring a pre-constructed service knowledge base based on the service handling reply gesture, and outputting a service handling guiding gesture aiming at the service handling request of the client so as to guide the client to handle the service in a gesture interaction mode.
In another aspect, the present application further provides a self-service device, where the self-service device includes:
an audio and video collector; a display; a communication module;
a memory for storing a program for implementing the gesture interaction based service handling method as described above;
and the processor is used for loading and executing the program stored in the memory to realize the service handling method based on the gesture interaction.
Therefore, a gesture interaction service mode is configured on the self-service device to meet the business handling needs of language-impaired clients. Specifically, in response to a service mode switching instruction for a client requesting to handle business, the self-service device is triggered to enter the gesture interaction service mode and outputs a business handling inquiry gesture to actively ask which service the client wants to handle. The client can then enter a business handling reply gesture, informing the device of the desired business through gesture interaction. By querying a pre-constructed service knowledge base, the device obtains and outputs a business handling guidance gesture for the requested service, guiding the client to handle the business quickly and accurately. This improves business handling efficiency, removes the need for staff to learn sign language in order to guide language-impaired clients, and reduces labor cost.
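To make the overall flow concrete, the following minimal Python sketch illustrates the interaction loop described above. All names in it (text_to_gesture, gesture_to_text, KNOWLEDGE_BASE, handle_round) are hypothetical placeholders introduced for illustration only and are not the implementation of the application.

# Minimal sketch of the gesture interaction loop: output an inquiry gesture,
# recognize the reply gesture, query the knowledge base, and return a guidance
# gesture. All names below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class KnowledgeBaseEntry:
    service_id: str
    guidance_text: str  # business handling guidance in text form


# Toy knowledge base: recognized reply text -> guidance entry.
KNOWLEDGE_BASE = {
    "credit card loss report": KnowledgeBaseEntry(
        service_id="CC_LOSS_REPORT",
        guidance_text="Insert ID card, verify identity, confirm the loss report.",
    ),
}


def text_to_gesture(text: str) -> str:
    """Placeholder for a gesture conversion model: text -> gesture sequence."""
    return f"<gesture:{text}>"


def gesture_to_text(gesture: str) -> str:
    """Placeholder for gesture semantic recognition: gesture sequence -> text."""
    return gesture.strip("<>").removeprefix("gesture:")


def handle_round(reply_gesture: str) -> str:
    """One round of the gesture interaction service mode."""
    inquiry = text_to_gesture("Which service would you like to handle?")
    print("device outputs:", inquiry)            # service handling inquiry gesture
    reply_text = gesture_to_text(reply_gesture)  # service handling reply gesture
    entry = KNOWLEDGE_BASE.get(reply_text)
    if entry is None:
        return text_to_gesture("Sorry, please repeat your request.")
    return text_to_gesture(entry.guidance_text)  # service handling guidance gesture


if __name__ == "__main__":
    print(handle_round("<gesture:credit card loss report>"))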
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a schematic flow chart diagram illustrating an alternative example of a gesture interaction based service transaction method according to the present application;
FIG. 2 is a schematic flow chart diagram illustrating yet another alternative example of a gesture interaction based business transaction method proposed in the present application;
FIG. 3 is a schematic flowchart illustrating another alternative example of a gesture interaction based transaction method according to the present application;
FIG. 4 is a schematic flow chart diagram illustrating yet another alternative example of a gesture interaction based transaction method proposed in the present application;
FIG. 5 is a schematic structural diagram of an alternative example of the gesture interaction based transaction apparatus proposed in the present application;
FIG. 6 is a schematic diagram of a hardware configuration of an alternative example of a self-service device suitable for use in the gesture interaction based business transaction method proposed in the present application;
FIG. 7 is a schematic diagram of an alternative example of a gesture interaction based business transaction system suitable for use in the present application.
Detailed Description
In view of the problems described in the Background section, it is desirable that the self-service device not only support a voice interaction function, so that clients can conveniently handle business by speaking directly, but also support a dynamic gesture interaction function: recognizing the client's gestures, accurately understanding the sign language, and answering in sign language, voice, or text. In this way, language-impaired clients can also use the self-service device to complete business consultation and handling, which improves their handling efficiency and experience and reduces the skill requirements on business staff.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Referring to fig. 1, a schematic flowchart of an optional example of the gesture interaction based business handling method provided by the present application, the method may be applied to a self-service device, such as an intelligent counter or a business self-service terminal at a bank branch. To enrich the ways in which customers can handle business and speed up handling, the self-service device supports multiple service modes, such as a voice interaction service mode, a touch service mode, a text interaction service mode, and a gesture interaction service mode; in actual use, the service mode can be flexibly selected according to the situation. It should be noted that, to implement these service modes, the self-service device is configured with corresponding software and hardware structures, which are not described in detail in the embodiments of the present application.
As shown in fig. 1, the service transaction method based on gesture interaction proposed by the present embodiment may include, but is not limited to, the following steps:
step S11, responding to a service mode switching instruction aiming at a service client requesting to handle the service, triggering the self-service equipment to enter a gesture interaction service mode, and outputting a service handling inquiry gesture;
when a language-handicapped client uses the self-service equipment to handle services, the client cannot use a voice assistant of the self-service equipment to handle the services, and needs to switch the self-service equipment to a gesture interaction service mode for fast handling. For the self-service device, the self-service device can respond to a service mode switching instruction which is generated in a targeted manner based on the awakening action or the triggering operation so as to trigger the self-service device to enter a gesture interaction service mode.
Optionally, the self-service device may also identify the client requesting to handle business through face recognition and then determine whether the client has been marked as language-impaired; or it may use image recognition to recognize an action input by the client and judge whether the recognized action is a preset specific gesture indicating that the client is language-impaired. Either way, the device determines whether the current client needs the gesture interaction service mode to complete the requested business. If so, it can respond to a service mode switching instruction for the gesture interaction service mode and be triggered to enter that mode; otherwise, the self-service device may stay in a default service mode, such as the voice interaction service mode or the touch service mode. The way the service mode switching instruction is generated is not limited and may be determined according to the circumstances.
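As a rough illustration of this decision logic, the sketch below shows one way to decide whether to switch to the gesture interaction service mode. The helper functions face_recognize and detect_specific_gesture, and the client-mark table, are assumptions made for the example and are not the application's stated implementation.

# Illustrative sketch of the mode-switching decision: switch to the gesture
# interaction service mode if the recognized client is marked as language-impaired
# in the business system, or if a preconfigured wake-up gesture is detected.
# face_recognize() and detect_specific_gesture() are assumed stand-ins.

from typing import Optional

VOICE_MODE = "voice_interaction"
GESTURE_MODE = "gesture_interaction"

# Client marks stored by the business system, keyed by client id (assumed layout).
CLIENT_MARKS = {"client_001": {"language_impaired": True}}


def face_recognize(face_image: bytes) -> Optional[str]:
    """Placeholder: returns a client id, or None if the face is not recognized."""
    return "client_001" if face_image else None


def detect_specific_gesture(video_frames: list) -> bool:
    """Placeholder: True if the preconfigured wake-up gesture appears in the frames."""
    return bool(video_frames)


def choose_service_mode(face_image: bytes, video_frames: list) -> str:
    client_id = face_recognize(face_image)
    marked = bool(client_id) and CLIENT_MARKS.get(client_id, {}).get("language_impaired", False)
    if marked or detect_specific_gesture(video_frames):
        return GESTURE_MODE  # corresponds to responding to the mode switching instruction
    return VOICE_MODE        # otherwise keep a default service mode


print(choose_service_mode(face_image=b"\x01", video_frames=[]))  # -> gesture_interaction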
After the self-service device enters the gesture interaction service mode, it can directly output a business handling inquiry gesture, that is, actively ask the client, through gesture interaction, which service they want to handle, guiding the client to express the business to be handled. In particular, for clients who have not used a self-service device before and do not know how to operate it, this relieves anxiety and improves the client experience.
It should be noted that the present application does not limit how the business handling inquiry gesture is obtained or output. For example, the inquiry gesture may be displayed by a three-dimensional virtual human built with artificial intelligence technology, which makes the gesture interaction more engaging; while the inquiry gesture is output, the virtual human may also be controlled to speak the corresponding inquiry sentence, so that the client can obtain the inquiry information by lip reading, meeting the needs of language-impaired people who do not know standard sign language. The construction of the three-dimensional virtual human and the way it is controlled to display gestures and speak sentences are not described in detail in this application.
S12, acquiring a service handling reply gesture input by a client based on the service handling inquiry gesture;
By watching the business handling inquiry gesture displayed by the self-service device, a client familiar with the content of the gesture database can understand the inquiry information it expresses and then, according to their own needs, make a corresponding business handling reply gesture within the capture range of the device's video collector to express the business they want to handle. The self-service device collects video data containing the reply gesture and obtains the business handling reply gesture entered by the client through gesture recognition on that video data. The method of recognizing gestures from images based on an image recognition algorithm is not described in detail in this application.
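As a simplified illustration of this recognition step, the sketch below turns per-frame gesture labels into a single reply-gesture label by majority vote; the classify_frame stub and the smoothing strategy are assumptions, not the recognition algorithm the application actually uses.

# Toy sketch: recognize the client's reply gesture from captured video frames by
# classifying each frame and smoothing with a majority vote over the clip.
# classify_frame() is an assumed stand-in for the device's per-frame classifier.

from collections import Counter


def classify_frame(frame) -> str:
    """Placeholder per-frame classifier returning a gesture label."""
    return frame  # in this toy example each "frame" is already its label


def recognize_reply_gesture(frames: list) -> str:
    labels = [classify_frame(f) for f in frames]
    most_common_label, _count = Counter(labels).most_common(1)[0]
    return most_common_label


frames = ["loss_report", "loss_report", "credit_card", "loss_report"]
print(recognize_reply_gesture(frames))  # -> loss_report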
And S13, inquiring a pre-constructed service knowledge base based on the service handling reply gesture, and outputting a service handling guide gesture aiming at the service handling request of the client so as to guide the client to handle the service in a gesture interaction mode.
The service knowledge base may be built from a gesture database that records various sign language gestures and their meanings, from which the service gestures corresponding to the services supported by the business system can be determined. In addition, a client may enter a personalized specific service gesture for a selected service through the service client corresponding to the business system; for example, a complex service can be expressed with a gesture the client defines and understands, simplifying how the service is expressed. Accordingly, where the business system supports configuring a personalized service knowledge base in the client-facing service APP, the construction of the service knowledge base may include, but is not limited to:
Receiving a specific service gesture sent by a service client. The specific service gesture may be entered by a client for a specific service: for example, the client logs in to the business system through the service client, which outputs a service gesture configuration interface listing the services of the business system; the client selects the specific service the gesture is to express, the service client outputs a gesture entry interface and prompts the client to enter a gesture expressing that service, and the entered gesture is recorded as a specific service gesture. The service client can then report the specific service gesture and its association with the specific service identifier to the business system to enrich the service knowledge base, but the entry of specific service gestures is not limited to this method.
In addition, for each service supported by the service system, the service gesture corresponding to each service supported by the service system may also be acquired based on the gesture database or the gesture conversion model, for example, the service information (e.g., the text or voice expression information of the service content) of each service is respectively input into the gesture conversion mode, the corresponding service gesture is automatically generated, or the service information and each gesture recorded in the gesture database are matched with the expression meaning thereof for query, so as to determine the corresponding service gesture, thereby improving the service gesture acquisition efficiency compared with a mode in which a client inputs the service gesture one by one. However, the method for acquiring the service gesture of each service supported by the service system is not limited in the present application.
Then, the received specific service gestures and the acquired service gestures corresponding to each service can be associated with the corresponding service identifiers, and the service knowledge base is constructed accordingly. A specific service gesture in the service knowledge base is also associated with the identity of the client who entered it; in other words, any specific service identifier may be associated with one or more specific service gestures expressing that service, together with the client identities of the clients who entered those gestures. Therefore, the same service identifier is associated with at least one service gesture, such as a general service gesture and possibly personalized service gestures entered by one or more clients.
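The sketch below shows one possible data layout for such a knowledge base, where each service identifier maps to a generic gesture plus any client-specific gestures; the dictionary layout and lookup rule are assumptions made for illustration.

# Assumed layout for the service knowledge base: each service identifier maps to a
# list of (gesture, owner) pairs, where owner is None for the generic gesture and a
# client id for a personalized gesture entered through the service client.

from collections import defaultdict
from typing import Optional

knowledge_base: dict = defaultdict(list)  # service_id -> [(gesture, owner), ...]


def add_generic_gesture(service_id: str, gesture: str) -> None:
    knowledge_base[service_id].append((gesture, None))


def add_client_gesture(service_id: str, gesture: str, client_id: str) -> None:
    """Personalized specific service gesture reported by the service client."""
    knowledge_base[service_id].append((gesture, client_id))


def lookup_service(gesture: str, client_id: Optional[str] = None) -> Optional[str]:
    """Return the service identifier matching a gesture, if the gesture is either
    generic or owned by the querying client."""
    for service_id, entries in knowledge_base.items():
        for g, owner in entries:
            if g == gesture and owner in (None, client_id):
                return service_id
    return None


add_generic_gesture("CC_LOSS_REPORT", "<standard sign: report credit card loss>")
add_client_gesture("CC_LOSS_REPORT", "<custom gesture #7>", client_id="client_001")
print(lookup_service("<custom gesture #7>", client_id="client_001"))  # -> CC_LOSS_REPORT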
In summary, in the embodiment of the present application, the self-service device may respond to the service mode switching instruction for the client requesting to transact the service, and trigger the self-service device to enter the gesture interaction service mode, so as to meet the service transaction requirements of the language barrier client. And then, the self-service equipment can also output a service handling inquiry gesture to actively inquire the service which the client wants to handle, guide the client to input a service handling reply gesture according to the service handling inquiry gesture, and timely inform the self-service equipment of the service which the client wants to handle, so that the service handling experience of the client who cannot use the self-service equipment is improved. The self-service equipment can obtain a service handling guiding gesture aiming at a service requested by a client and then output the service handling guiding gesture by inquiring a service knowledge base which is constructed in advance, and the client is guided to handle the service quickly and accurately by the gesture interaction mode, so that the service handling efficiency is improved.
Referring to fig. 2, which is a schematic flowchart of another optional example of the service transaction method based on gesture interaction proposed in the present application, this embodiment may describe an optional detailed implementation manner of the service transaction method based on gesture interaction described above, and as shown in fig. 2, the method may include:
step S21, obtaining identity identification information of a client requesting to transact business;
step S22, determining that the identity identification information meets the gesture interaction condition, and generating a service mode switching instruction aiming at a gesture interaction service mode of the self-service equipment;
in the embodiment of the application, the identity identification information of the client entering the service handling area of the self-service equipment, namely the client requesting to handle the service, can be determined through face recognition, image recognition-based client reaction action, specific action recognition and other modes. Therefore, the identification information can be used for representing whether the client belongs to a language barrier or not so as to determine whether to enter a gesture interaction service mode or not, and the content of the identification information and the acquisition method thereof are not limited in the application.
Based on the above analysis, the gesture interaction condition may be a condition identifying the client as language-impaired, or as needing to handle business through gesture interaction, and its content may be determined in combination with the identity recognition information described above. For example, the client is marked in advance as language-impaired in the banking system, and the corresponding client marking information is associated with the client's identification (such as a face image, ID number, or account number). The client marking information (one kind of identity recognition information) can then be obtained based on the face recognition result and examined to determine whether the client is language-impaired, or is marked as handling business through gesture interaction.
Therefore, steps S21 and S22 may include: acquiring a face image of the client requesting to handle business; performing face recognition on the face image; determining, according to the client marking information stored in the business system, that the client is marked as language-impaired; and generating a service mode switching instruction for the gesture interaction service mode of the self-service device. It should be noted that the implementations of steps S21 and S22 are not limited to the method described above; reference may be made to other implementations described in the corresponding parts of this document, which are not detailed again here.
Step S23, responding to the service mode switching instruction, triggering the self-service equipment to enter a gesture interaction service mode, and outputting a service handling inquiry gesture;
in still other embodiments, the method for generating the service mode switching instruction may further include: and detecting a pre-configured awakening action aiming at a gesture interaction service mode of the self-service equipment, and generating a service mode switching instruction aiming at the gesture interaction service mode. The awakening action can be input by a client logging in a service system (namely, a personalized action is input through a service client to represent that the client needs to handle services in a gesture interaction mode, and the implementation process is not described in detail in the application), or the service system is configured by default, and the content and the configuration method of the awakening action are not limited in the application and can be determined according to the situation.
Based on this, when a client wants to use a gesture interaction mode to transact business with a self-service device, a service switching instruction for the gesture interaction service mode may be generated by making a pre-configured wake-up action that requires a gesture interaction service mode or making a default configured wake-up action of the self-service device within the acquisition range of the video acquirer of the self-service device, but the method is not limited to this implementation method.
Step S24, acquiring a service handling reply gesture input by the client based on the service handling inquiry gesture;
regarding how the self-service device enters the gesture interaction service mode, the implementation process of outputting the business handling inquiry gesture, and how to obtain the business handling reply gesture entered by the client, reference may be made to the description of the context corresponding part, and details of the embodiment of the present application are not described herein.
Step S25, performing semantic recognition on the transacted business reply gesture to obtain transacted business reply information;
in the embodiment of the application, semantic recognition may be performed on the service handling reply gesture based on a gesture database or a gesture conversion model (i.e., a gesture translation model), so as to obtain service handling reply information expressed by the service handling reply gesture, where the service handling reply information may be in a voice or text format, and the application does not limit how to implement gesture recognition.
Step S26, inquiring a pre-constructed service knowledge base to obtain service handling guide information matched with the service handling reply information;
and step S27, converting the service handling guiding information into a service handling guiding gesture and outputting the service handling guiding gesture.
In combination with the above description of the service knowledge base, the transacted service reply information and the service information recorded in the service knowledge base can be directly matched one by one, for example, matching is performed by a similarity algorithm, and the matched service information forms service transacting guide information, that is, service transacting flow information for guiding a client to transact a required service.
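One way to read the "similarity algorithm" matching mentioned above is a simple string-similarity comparison between the recognized reply text and each recorded service description, as in the sketch below; the use of difflib and the 0.6 threshold are assumptions, not choices stated in the application.

# Assumed illustration of matching the recognized reply information against the
# knowledge base by string similarity and returning the guidance of the best match.

from difflib import SequenceMatcher
from typing import Optional

SERVICES = {
    "credit card loss report": "Step 1: verify identity. Step 2: freeze the card. Step 3: confirm.",
    "open a savings account": "Step 1: verify identity. Step 2: fill in the account form.",
}


def match_guidance(reply_text: str, threshold: float = 0.6) -> Optional[str]:
    best_score, best_guidance = 0.0, None
    for description, guidance in SERVICES.items():
        score = SequenceMatcher(None, reply_text.lower(), description).ratio()
        if score > best_score:
            best_score, best_guidance = score, guidance
    return best_guidance if best_score >= threshold else None


print(match_guidance("credit card loss"))  # -> guidance for the loss report flow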
Then, the service handling guiding information can be converted based on a gesture database or a gesture conversion model to obtain a corresponding service handling guiding gesture, and then the service handling guiding gesture can be displayed through a pre-constructed three-dimensional virtual character, so that a language-handicapped client can know how to handle the required service, namely, the service handling process. It should be understood that the business handling process may be known through one or more gesture interaction methods, and each gesture interaction process may be implemented by referring to the method described above, and the embodiments of the present application are not described in detail herein.
Referring to fig. 3, which is a schematic flowchart of another optional example of the service transaction method based on gesture interaction proposed in the present application, this embodiment may describe another optional detailed implementation manner of the service transaction method based on gesture interaction described above, as shown in fig. 3, the method may include:
step S31, video data of a service handling area of the self-service equipment is obtained;
step S32, analyzing the video data, determining that a client enters the service handling area, and outputting a specific voice sentence;
step S33, acquiring the response video data of the client aiming at the specific voice statement;
step S34, performing audio and body action recognition on the reaction video data, determining that a gesture interaction condition is met, and triggering the self-service equipment to enter a gesture interaction service mode;
In this embodiment, in order to decide whether to enter the gesture interaction service mode and handle business through gesture interaction, the self-service device can, after determining that a client has entered its business handling area, greet the client by outputting a specific voice sentence, observe the client's reaction, and on that basis decide whether to enter the gesture interaction service mode and help the client complete the business.
Whether a client has entered the business handling area of the self-service device can be monitored based on image recognition technology, the implementation of which is not described in detail here. If a client is present, a preset specific voice sentence can be retrieved and sent to an audio player for playback, for example greetings or prompts such as "Hello, what do you need to handle?", "How can I help you?", "Welcome...", or a safe-use prompt; the content of the specific voice sentence is not limited by this application.
Clients in different physical conditions react differently to the specific voice sentence played by the self-service device. A client with normal speech and hearing can simply state the desired business, and the self-service device can stay in the voice interaction service mode to guide the client through handling. A language-impaired client, however, cannot produce meaningful audio and tends to make frequent body movements. The video collector and audio collector of the self-service device, both in a working state, capture this reaction; by analyzing the captured data, the device determines that the client gave no audio feedback to the played voice sentence but did make high-frequency body movements, and therefore that the gesture interaction condition is met, so the language-impairment assistance function is started, i.e., the self-service device is triggered to enter the gesture interaction service mode. Otherwise, the client communicates with the device by normal voice and triggers the voice interaction service mode.
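A toy version of this reaction check appears below: if there is no speech-like audio but frequent body motion, the device switches to the gesture interaction service mode. The energy and motion thresholds are illustrative assumptions only.

# Sketch of the reaction check: no meaningful audio reply plus high-frequency body
# motion -> gesture interaction service mode. Threshold values are assumptions.


def has_speech(audio_energy: list, energy_threshold: float = 0.2) -> bool:
    """Very rough voice-activity check over per-frame audio energy values."""
    return any(e > energy_threshold for e in audio_energy)


def motion_frequency(motion_scores: list, motion_threshold: float = 0.5) -> float:
    """Fraction of video frames showing noticeable body motion."""
    if not motion_scores:
        return 0.0
    return sum(s > motion_threshold for s in motion_scores) / len(motion_scores)


def select_mode(audio_energy: list, motion_scores: list) -> str:
    if not has_speech(audio_energy) and motion_frequency(motion_scores) > 0.6:
        return "gesture_interaction"  # language-impairment assistance enabled
    return "voice_interaction"


print(select_mode(audio_energy=[0.05, 0.03], motion_scores=[0.9, 0.8, 0.7, 0.2]))
# -> gesture_interaction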
Step S35, converting the specific voice sentence into a service handling inquiry gesture, and outputting the service handling inquiry gesture;
step S36, acquiring a service handling reply gesture input by the client based on the service handling inquiry gesture;
and step S37, inquiring a pre-constructed service knowledge base based on the service handling reply gesture, and outputting a service handling guide gesture aiming at the service handling request of the client.
Regarding the implementation process of step S35 to step S37, reference may be made to the description of the context corresponding part, and this embodiment will not be described in detail here.
In summary, in this embodiment, when the self-service device detects that a client has entered the business handling area, it can actively ask, by outputting a specific voice sentence, which business the client wants to handle, and at the same time detect whether the client can use the voice interaction service mode. Specifically, it can analyze the captured video data to determine the client's reaction to the voice sentence, such as whether a spoken reply is given and whether high-frequency body movements are made, and thereby switch to a suitable service mode. The client can then handle business by whichever of voice interaction or gesture interaction they are able to use, without staff assistance for communication, which saves labor cost and better meets the handling needs of language-impaired people.
Referring to fig. 4, which is a flowchart illustrating a further optional example of the service transaction method based on gesture interaction proposed in the present application, this embodiment may describe a further optional detailed implementation manner of the service transaction method based on gesture interaction described above, as shown in fig. 4, the method may include:
step S41, responding to a service mode switching instruction aiming at a client requesting to handle the service, and triggering the self-service equipment to enter a gesture interaction service mode;
step S42, acquiring historical interaction information of the client on a service system;
step S43, analyzing the historical interaction information to obtain the prediction service requested to be handled by the client and the prediction operation of the prediction service;
step S44, constructing service inquiry information aiming at the predicted service and operation inquiry information aiming at the predicted operation;
in order to further simplify the operation of the client and improve the experience of the client, the self-service device can predict the content of the business requested to be transacted by the client currently, such as the business type and the operation category thereof, the process can utilize a pre-trained business prediction model to analyze the historical interaction information of the client (such as the log data of the past transaction of the business on a business system, and the historical business type, the historical operation category, the historical transaction time and the like which are transacted can be included, the application does not limit the content of the historical interaction information and can be determined according to the situation), predict the business transaction intention of the client, predict the business requested to be transacted by the client currently and the operation thereof, and respectively mark the business transaction intention as a predicted business and a predicted operation. After that, the above step S44 may be executed according to a preset inquiry template, but is not limited to this construction manner, and the preset inquiry template configuration method and its content are not limited, and may be determined according to the circumstances.
Illustratively, the business handling intention of the client is predicted according to the method, the predicted business handling problem is obtained, and the predicted business handling problem can be split so as to be conveniently expressed by the language barrier client. If the problem that the customer wants to ask the credit card how to report loss is predicted, the problem can be divided into two parts, namely a predicted service and a reported loss, and then two predicted subproblems, namely 'what kind of service the customer wants to handle' and 'what operation the customer wants to do' can be constructed according to inquiry templates of different problems and are respectively recorded as service inquiry information and operation inquiry information, but the method is not limited to the problem construction method.
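The sketch below illustrates one way to turn a predicted (business, operation) pair into the two sub-questions; the predict_intent stub and the template wording are assumptions for illustration and not the application's prescribed templates.

# Assumed sketch: derive the predicted business and predicted operation from the
# client's history, then build the service inquiry and operation inquiry from
# simple templates.


def predict_intent(history: list) -> tuple:
    """Placeholder for the pre-trained business prediction model."""
    last = history[-1]  # e.g. fall back to the most recent transaction
    return last["service"], last["operation"]


SERVICE_TEMPLATE = "Which service do you want to handle? (predicted: {service})"
OPERATION_TEMPLATE = "Which operation do you want to perform? (predicted: {operation})"


def build_inquiries(history: list) -> tuple:
    service, operation = predict_intent(history)
    service_inquiry = SERVICE_TEMPLATE.format(service=service)
    operation_inquiry = OPERATION_TEMPLATE.format(operation=operation)
    return service_inquiry, operation_inquiry


history = [{"service": "credit card", "operation": "loss report", "time": "2022-08-01"}]
print(build_inquiries(history))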
In still other embodiments, after the self-service device enters the gesture interaction mode, two sub-questions, i.e., the pre-split business query information and the operation query information, may also be directly invoked, or even more sub-questions may be invoked, as the case may be.
Step S45, converting the service inquiry information into a service inquiry gesture and outputting the service inquiry gesture, and converting the operation inquiry information into an operation inquiry gesture and outputting the operation inquiry gesture;
step S46, acquiring a service reply gesture input by the client based on the service inquiry gesture and an operation reply gesture input by the client based on the operation inquiry gesture;
in practical application, the self-service device may obtain each sub-question and then directly output the sub-question to obtain a response gesture of the customer for the sub-question, so that, after obtaining a service inquiry gesture by using a gesture database or a gesture conversion model, the self-service device may directly output the service inquiry gesture to obtain a service response gesture input by the customer. And then outputting an operation inquiry gesture, and making an operation reply gesture by the client, such as a service reply gesture for expressing the loss-of-report meaning. However, the method is not limited to this control method, and the client may also directly output the obtained service query gesture and operation query gesture, and then input the service reply gesture and operation reply gesture, and the implementation process of obtaining each gesture is not described in detail.
Therefore, the problem is divided into a plurality of sub-problems, and the client replies to each sub-problem, so that the difficulty of answering gesture expression is greatly simplified, and the business handling efficiency and the client experience are improved.
Step S47, respectively performing semantic recognition on the service reply gesture and the operation reply gesture, and merging recognized semantic information to obtain transacted service reply information;
In this embodiment, the reply contents collected for the sub-questions can be merged. For example, if the semantics recognized from the reply gestures are "credit card" and "loss report", they can be merged into a single reply, "credit card loss report", so that matching business handling guidance information, such as the handling flow for reporting a lost credit card, can be obtained by querying the service knowledge base; the implementation is not described in detail in this application.
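A minimal sketch of this merge-then-query step, assuming a simple space-joined request string as the knowledge-base key, is shown below.

# Assumed sketch: merge the semantics of the service reply gesture and the operation
# reply gesture into one request string and use it to look up guidance information.

GUIDANCE = {
    "credit card loss report": "1) verify identity  2) freeze the card  3) confirm the report",
}


def merge_and_query(service_semantics: str, operation_semantics: str):
    request = f"{service_semantics} {operation_semantics}".strip()
    return GUIDANCE.get(request)


print(merge_and_query("credit card", "loss report"))
# -> "1) verify identity  2) freeze the card  3) confirm the report"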
Step S48, inquiring a service knowledge base which is constructed in advance to obtain service handling guide information matched with the service handling reply information;
and step S49, converting the service handling guiding information into service handling guiding gestures to be output.
Regarding the implementation processes of step S48 and step S49, reference may be made to the description of the corresponding parts in the above embodiments, which is not described in detail in this embodiment.
Optionally, in the output process of each gesture, the three-dimensional virtual character can be controlled to synchronously speak the service handling guidance voice with the same content, so that the language barrier can also learn the service handling process by reading the lip language, thereby completing the service that the client needs to handle, and the implementation process is not described in detail in this application.
In still other embodiments, when the self-service device actively asks a client what business they want to handle, it can also obtain a specific voice sentence preconfigured by the business system and convert it into a business handling inquiry gesture for output; or it can call a business handling inquiry gesture preconfigured for the gesture interaction service mode and output it, which is not described in detail in this application. The business handling inquiry gesture may be obtained by converting preset business handling inquiry information into standard sign language, or may be pre-recorded by a client through the service client.
Referring to fig. 5, a schematic structural diagram of an alternative example of the service handling apparatus based on gesture interaction proposed in the present application, as shown in fig. 5, the apparatus may include:
the service mode switching module 51 is used for responding to a service mode switching instruction aiming at a client requesting to transact business and triggering the self-service equipment to enter a gesture interaction service mode;
a service handling inquiry gesture output module 52 for outputting a service handling inquiry gesture;
a service handling reply gesture obtaining module 53, configured to obtain a service handling reply gesture input by the client based on the service handling inquiry gesture;
a service handling guidance gesture output module 54, configured to query a pre-constructed service knowledge base based on the service handling reply gesture, and output a service handling guidance gesture for requesting to handle a service by the client, so as to guide the client to handle the service in a gesture interaction manner.
Optionally, the service mode switching module 51 may include:
the identity identification information acquisition unit is used for acquiring identity identification information of a client requesting service handling;
the gesture interaction condition determining unit is used for determining that the identity recognition information meets a gesture interaction condition;
the response unit is used for responding to a service mode switching instruction of a gesture interaction service mode aiming at the self-service equipment;
optionally, the service mode switching module 51 may also include:
and the awakening action detection unit is used for detecting the awakening action which is pre-configured aiming at the gesture interaction service mode of the self-service equipment and triggering the response unit to respond to the service mode switching instruction aiming at the gesture interaction service mode.
In some embodiments, the identification information obtaining unit and the gesture interaction condition determining unit may include:
the face image acquisition unit is used for acquiring a face image of a client requesting service handling;
the face recognition unit is used for carrying out face recognition on the face image;
and the first determining unit is used for determining, according to the face recognition result and the client marking information stored in the business system, that the client is marked as language-impaired or as handling business in a gesture interaction mode.
In still other embodiments, the identification information obtaining unit and the gesture interaction condition determining unit may further include:
the video data acquisition unit is used for acquiring video data of a service handling area of the self-service equipment;
a specific voice sentence output unit, configured to analyze the video data, determine that a client enters the service handling area, and output a specific voice sentence;
a reaction video data acquisition unit, configured to acquire reaction video data of the client for the specific voice statement;
and the second determining unit is used for performing audio and body motion recognition on the reaction video data and determining that the gesture interaction condition is met.
Based on the above description of the embodiments, the service-transacting inquiry gesture output module 52 may include:
a historical interactive information acquisition unit, configured to acquire historical interactive information of the client on a service system;
the prediction analysis unit is used for analyzing the historical interaction information to obtain the prediction service requested to be handled by the client and the prediction operation of the prediction service;
an inquiry information construction unit for constructing service inquiry information for the predicted service and operation inquiry information for the prediction operation;
the first conversion output unit is used for converting the service inquiry information into a service inquiry gesture and then outputting the service inquiry gesture;
and the second conversion output unit is used for converting the operation inquiry information into an operation inquiry gesture and then outputting the operation inquiry gesture.
Based on the above description, optionally, the above-mentioned transaction reply gesture obtaining module 53 may include:
a reply gesture acquisition unit for acquiring a service reply gesture input by the client based on the service inquiry gesture and an operation reply gesture input by the client based on the operation inquiry gesture;
accordingly, the service handling guidance gesture output module 54 may include:
the semantic recognition unit is used for performing semantic recognition on the service reply gesture and the operation reply gesture respectively;
the information merging processing unit is used for merging the identified semantic information to obtain the transacted service reply information;
the query unit is used for querying a service knowledge base which is constructed in advance to obtain service handling guide information matched with the service handling reply information;
and the third conversion output unit is used for converting the business handling guide information into a business handling guide gesture to be output.
In still other embodiments, the service-handling inquiry gesture output module 52 may also include:
the fourth conversion output unit is used for acquiring a specific voice sentence preconfigured by the service system, converting it into a service handling inquiry gesture, and outputting the service handling inquiry gesture; or,
and the calling output unit is used for calling a service handling inquiry gesture which is pre-configured according to the gesture interaction service mode and outputting the service handling inquiry gesture.
In order to implement the construction of the service knowledge base, the service system may include:
the specific service gesture receiving module is used for receiving a specific service gesture sent by a service client; the specific business gesture is input by a client for a specific business;
the service gesture obtaining module is used for obtaining service gestures corresponding to various services supported by the service system based on a gesture database or a gesture conversion model;
the construction module is used for associating the received specific service gesture and the acquired service gesture corresponding to each service with a service identifier of the corresponding service to construct a service knowledge base;
the specific service gesture in the service knowledge base is associated with an identity of the client who inputs the specific service gesture; and at least one service gesture is associated with the same service identifier.
It should be noted that, various modules, units, and the like in the embodiments of the foregoing apparatuses may be stored in a memory as program modules, and the processor may execute the program modules stored in the memory to implement corresponding functions, or may be implemented by combining the program modules and hardware, and for the functions implemented by the program modules and the combinations thereof and the achieved technical effects, reference may be made to the description of corresponding parts in the embodiments of the foregoing methods, and this embodiment is not described again.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and the computer program is loaded and executed by a processor to implement each step of the service handling method based on gesture interaction.
Referring to fig. 6, a schematic diagram of a hardware structure of an alternative example of a self-service device suitable for the gesture interaction based business transaction method proposed in the present application is shown, as shown in fig. 6, the self-service device may include, but is not limited to: audio video collector 61, display 62, communication module 63, memory 64 and processor 65, wherein:
The number of each of the audio/video collector 61, the display 62, the communication module 63, the memory 64 and the processor 65 may be at least one, and these components may be connected to a communication bus so as to exchange data with one another and with other structural components of the device through the bus. The specific numbers may be determined according to actual requirements and are not detailed here.
The audio/video collector 61 may include an audio collector and a video collector. The audio collector may be configured to collect the voice input by a client when the self-service device enters the voice interaction service mode; when the self-service device enters the gesture interaction service mode, the audio collector may also be enabled to collect sound in order to determine whether the client has a language barrier, that is, whether the client speaks or feeds back an audio signal.
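As an illustrative assumption of how the collected audio might be checked for a spoken response, the following sketch applies a simple RMS-energy threshold to 16-bit PCM samples; the threshold value is arbitrary and not specified in the application.

def client_spoke(samples, rms_threshold: float = 500.0) -> bool:
    """Return True if the recorded 16-bit PCM samples carry audible speech energy."""
    if not samples:
        return False
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return rms > rms_threshold


print(client_spoke([0, 12, -8, 5]))        # near-silence -> False
print(client_spoke([4000, -3800, 4200]))   # strong signal -> True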
The video collector may include one or more cameras, and is used for capturing images of a client who enters the video collection area/service handling area of the self-service device, yielding video data consisting of consecutive frame images. The client can then be analyzed from these images to recognize body actions, gestures, and the like; the implementation process is not detailed in this application.
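For illustration, a frame-capture loop using OpenCV is sketched below; OpenCV is only one possible capture library, and the recognition step is left as a placeholder because the application does not commit to a particular gesture-recognition algorithm.

import cv2  # example capture library; any camera SDK could be used instead


def capture_frames(camera_index: int = 0, max_frames: int = 30):
    """Grab a short burst of frames from one camera of the video collector."""
    capture = cv2.VideoCapture(camera_index)
    frames = []
    try:
        while len(frames) < max_frames:
            ok, frame = capture.read()
            if not ok:
                break
            frames.append(frame)
    finally:
        capture.release()
    return frames


def recognize_gesture(frames):
    # Placeholder: a deployed device would run a body-action/gesture model here.
    return "no_gesture" if not frames else "gesture_candidate"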
The communication module 63 may include a module capable of data interaction over a wireless communication network, such as a WIFI module, a 5G/6G (fifth/sixth generation mobile communication network) module, or a GPRS module. Thus, in this embodiment of the present application, as shown in the system structure diagram of fig. 7, the self-service device may communicate with a background service server (on which the service system is deployed) to query client information, and may even send each collected gesture made by the client to the service server for conversion processing; the server queries the service knowledge base, obtains the corresponding service handling guidance information, converts it into a service handling guidance gesture, and feeds the gesture back to the self-service device for output. For the communication process between the self-service device, the terminal device and the service server, reference may be made to the description of the corresponding parts of the above embodiments.
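A hedged sketch of the device-to-server exchange described above, using HTTP as one possible transport; the endpoint URL and the payload and response fields are assumptions, since the application does not specify a wire protocol.

import requests  # one possible HTTP client for the wireless link


def request_guidance_from_server(client_id: str, gesture_payload: dict) -> str:
    """Send a collected client gesture to the background service server and return
    the guidance gesture produced after the server queries the service knowledge base."""
    response = requests.post(
        "https://service-server.example.com/api/gesture-guidance",  # hypothetical endpoint
        json={"client_id": client_id, "gesture": gesture_payload},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["guidance_gesture"]  # assumed response field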
In addition, the communication module 63 may further include communication interfaces for data interaction between internal components of the self-service device, such as a USB interface, a serial/parallel port, or an I/O port, through which, for example, the processor receives the video data acquired by the video collector and the audio signal acquired by the audio collector.
The memory 64 may be used to store a program implementing the gesture interaction based service handling method described in the above method embodiments; the processor 65 may load and execute the program stored in the memory to implement the steps of that method. For the specific implementation process, reference may be made to the description of the corresponding parts of the above embodiments, which is not repeated here.
In the present embodiment, the memory 64 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device or other non-volatile solid-state storage device. The processor 65 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device.
It should be understood that the structure of the self-service device shown in fig. 6 does not limit the self-service device in the embodiments of the present application; in practical applications, the self-service device may include more components than those shown in fig. 6, or some components may be combined, which is not specifically described in this application.
It should be noted that the service handling method and device based on gesture interaction and the self-service device provided by the present application can be used in the fields of artificial intelligence, cloud computing, chips or finance. The above description is only an example and does not limit the application fields of the gesture interaction based service handling method and apparatus and the self-service device provided by the present application.
In the embodiments described above, the terms "a," "an," and/or "the" do not denote a single element and may include a plurality of elements, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements. An element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
Terms such as "first" and "second" are used herein for descriptive purposes only, to distinguish one operation, element, or module from another, and do not necessarily require or imply any actual relationship or order between such elements, operations, or modules. Nor are they to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; thus a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features.
In addition, the embodiments in this specification are described in a progressive or parallel manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may be referred to one another. Since the apparatus, the computer device, the system and the storage medium disclosed in the embodiments correspond to the methods disclosed in the embodiments, their description is relatively brief, and for relevant points reference may be made to the description of the method part.
Those of skill in the art will further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the components and steps of the various examples have been described above generally in terms of their functionality. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A service handling method based on gesture interaction is characterized by comprising the following steps:
responding to a service mode switching instruction aiming at a client requesting to handle the service, triggering the self-service equipment to enter a gesture interaction service mode, and outputting a service handling inquiry gesture;
acquiring a service handling reply gesture input by the client based on the service handling inquiry gesture;
and inquiring a pre-constructed service knowledge base based on the service handling reply gesture, and outputting a service handling guiding gesture for the service that the client requests to handle, so as to guide the client to handle the service in a gesture interaction mode.
2. The method of claim 1, wherein the responding to a service mode switching instruction aiming at a client requesting to handle the service comprises:
acquiring identity recognition information of a client requesting to handle a service, determining that the identity recognition information meets a gesture interaction condition, and responding to a service mode switching instruction aiming at a gesture interaction service mode of self-service equipment; or,
and detecting a wakeup action pre-configured for a gesture interaction service mode of the self-service equipment, and responding to a service mode switching instruction for the gesture interaction service mode.
3. The method of claim 2, wherein the obtaining identification information of the client requesting service handling and determining that the identification information satisfies a gesture interaction condition comprises:
acquiring a face image of a client requesting to handle a service;
and performing face recognition on the face image, and determining, according to client marking information stored in a service system, that the client is marked as having a language barrier or as handling services in a gesture interaction mode.
4. The method of claim 2, wherein the obtaining identification information of the client requesting service handling and determining that the identification information satisfies a gesture interaction condition comprises:
acquiring video data of a business handling area of self-service equipment;
analyzing the video data, determining that a client enters the service handling area, and outputting a specific voice sentence;
acquiring reaction video data of the client aiming at the specific voice statement;
and performing audio and body action recognition on the reaction video data, and determining that the gesture interaction condition is met.
5. The method of any of claims 1-4, wherein the outputting of the service handling inquiry gesture comprises:
acquiring historical interaction information of the client on a service system;
analyzing the historical interaction information to obtain the prediction service requested to be transacted by the client and the prediction operation of the prediction service;
constructing service inquiry information aiming at the predicted service and operation inquiry information aiming at the predicted operation;
converting the service inquiry information into a service inquiry gesture and outputting the service inquiry gesture;
and converting the operation inquiry information into an operation inquiry gesture and outputting the operation inquiry gesture.
6. The method of claim 5, wherein the acquiring of the service handling reply gesture input by the client based on the service handling inquiry gesture comprises:
acquiring a service reply gesture input by the client based on the service inquiry gesture and an operation reply gesture input by the client based on the operation inquiry gesture;
the inquiring of the pre-constructed service knowledge base based on the service handling reply gesture and the outputting of the service handling guiding gesture for the service handling request of the client comprise:
performing semantic recognition on the service reply gesture and the operation reply gesture respectively, and merging the recognized semantic information to obtain service handling reply information;
inquiring a pre-constructed service knowledge base to obtain service handling guide information matched with the service handling reply information;
and converting the service handling guide information into a service handling guide gesture and outputting the service handling guide gesture.
7. The method of any of claims 1-4, wherein the outputting of the service handling inquiry gesture comprises:
acquiring a specific voice sentence preconfigured by a service system, converting the specific voice sentence into a service handling inquiry gesture and outputting the service handling inquiry gesture; or,
and calling a service handling inquiry gesture which is pre-configured according to the gesture interaction service mode, and outputting the service handling inquiry gesture.
8. The method according to any one of claims 1-4, wherein the construction method of the business knowledge base comprises:
receiving a specific service gesture sent by a service client; the specific business gesture is input by a client for a specific business;
acquiring service gestures corresponding to various services supported by a service system based on a gesture database or a gesture conversion model;
associating the received specific service gesture and the acquired service gesture corresponding to each service with a service identifier of the corresponding service to construct a service knowledge base;
the specific service gesture in the service knowledge base is associated with an identity of the client who inputs the specific service gesture; and at least one service gesture is associated with the same service identifier.
9. A gesture interaction based business handling apparatus, the apparatus comprising:
the service mode switching module is used for responding to a service mode switching instruction aiming at a client requesting to handle the service and triggering the self-service equipment to enter a gesture interaction service mode;
the service handling inquiry gesture output module is used for outputting a service handling inquiry gesture;
a service handling reply gesture acquisition module, configured to acquire a service handling reply gesture input by the client based on the service handling inquiry gesture;
and the service handling guiding gesture output module is used for inquiring a pre-constructed service knowledge base based on the service handling reply gesture, and outputting a service handling guiding gesture aiming at the service handling request of the client so as to guide the client to handle the service in a gesture interaction mode.
10. A self-service device, characterized in that the self-service device comprises:
an audio and video collector; a display; a communication module;
a memory for storing a program for implementing the gesture interaction based transaction method according to any one of claims 1-8;
a processor for loading and executing the program stored in the memory to implement the method for transacting business based on gesture interaction according to any one of claims 1 to 8.
CN202211070526.8A 2022-09-02 2022-09-02 Service handling method and device based on gesture interaction and self-service equipment Pending CN115437500A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211070526.8A CN115437500A (en) 2022-09-02 2022-09-02 Service handling method and device based on gesture interaction and self-service equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211070526.8A CN115437500A (en) 2022-09-02 2022-09-02 Service handling method and device based on gesture interaction and self-service equipment

Publications (1)

Publication Number Publication Date
CN115437500A true CN115437500A (en) 2022-12-06

Family

ID=84246393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211070526.8A Pending CN115437500A (en) 2022-09-02 2022-09-02 Service handling method and device based on gesture interaction and self-service equipment

Country Status (1)

Country Link
CN (1) CN115437500A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117350526A (en) * 2023-10-07 2024-01-05 交通银行股份有限公司广东省分行 Intelligent sorting method, device, equipment and storage medium for self-service items

Similar Documents

Publication Publication Date Title
CN112104899B (en) Information recommendation method and device in live broadcast, electronic equipment and storage medium
US9300672B2 (en) Managing user access to query results
JP6817467B2 (en) Resolving automated assistant requests based on images and / or other sensor data
KR20200007946A (en) Save metadata related to captured images
US20190221208A1 (en) Method, user interface, and device for audio-based emoji input
WO2019024692A1 (en) Speech input method and device, computer equipment and storage medium
US10540977B2 (en) Proximity-based engagement with digital assistants
KR20190041343A (en) Electronic apparatus and server for processing user utterance
CN115437500A (en) Service handling method and device based on gesture interaction and self-service equipment
KR20220018461A (en) server that operates a platform that analyzes voice and generates events
CN113807955A (en) Information auditing method and related equipment
EP4345645A1 (en) User question labeling method and device
CN112131903A (en) Equipment data analysis method, device, service platform, system and medium
CN115602160A (en) Service handling method and device based on voice recognition and electronic equipment
CN113221990B (en) Information input method and device and related equipment
CN111626834B (en) Intelligent tax processing method, device, terminal and medium
CN112562734B (en) Voice interaction method and device based on voice detection
KR102412643B1 (en) Personalized artificial intelligence kiosk device and service method using the same
CN113764097A (en) Medical advice data processing method, terminal device, server and storage medium
CN113505199A (en) Business response information processing method and device based on bank number calling terminal and service end system
CN111626684B (en) Intelligent tax processing method, device, terminal and medium
CN112381989A (en) Sorting method, device and system and electronic equipment
CN111754327A (en) Queuing prompting method and device
CN207818190U (en) A kind of information interaction system based on speech recognition
CN113516515B (en) Information pushing method, device and system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination