CN111208899A - Interaction processing method and device, terminal and server - Google Patents


Info

Publication number
CN111208899A
Authority
CN
China
Prior art keywords
control layer
interaction
instruction
processing instruction
input information
Legal status
Granted
Application number
CN201811397854.2A
Other languages
Chinese (zh)
Other versions
CN111208899B (en)
Inventor
徐绍伟
周明智
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd
Priority to CN201811397854.2A
Publication of CN111208899A
Application granted
Publication of CN111208899B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Embodiments of the application provide an interaction processing method and apparatus, a terminal, and a server. A client comprises a first control layer and a second control layer that perform service logic processing; the second control layer establishes a full-duplex communication channel with the server, receives over that channel at least one first interaction processing instruction that the server feeds back based on multi-modal interaction input information, and invokes the first control layer to process the at least one first interaction processing instruction so that the corresponding service operations are respectively executed. The technical solution of the embodiments simplifies interaction operations and improves service processing efficiency.

Description

Interaction processing method and device, terminal and server
Technical Field
The embodiment of the application relates to the technical field of computer application, in particular to an interaction processing method, an interaction processing device, a client, a server and an electronic terminal.
Background
In an offline service place where goods are sold, a user typically selects goods according to his or her own needs and tells the seller, and the seller then provides the requested goods based on information such as the names of the goods supplied by the customer.
To improve selling efficiency, self-service terminals are commonly deployed in offline service places so that users can select commodities by themselves and a business order is generated automatically, allowing a commodity transaction to be completed without a seller.
In current self-service terminals, a user performs a screen interaction input, such as touching the screen or operating a key, to trigger a corresponding interaction processing instruction, and the corresponding service operation is realized by executing that instruction. For example, the self-service terminal may display a selection control for a commodity in its display interface; the user triggers a commodity selection instruction by clicking the selection control on the screen, and a service order is generated by interacting with a server based on that instruction.
However, in practical applications, multiple business operations, such as adding commodities or deleting commodities, may be required for one transaction through the self-service terminal, while one screen interaction input can only trigger one interaction processing instruction. A user may therefore need to perform many screen interaction inputs, making the interaction operations cumbersome and the business processing inefficient.
Disclosure of Invention
The embodiment of the application provides an interactive processing method, an interactive processing device, a terminal, a server and a physical machine.
In a first aspect, an embodiment of the present application provides an interactive processing method, including:
establishing a full-duplex communication channel between the client and the server;
receiving at least one first interaction processing instruction fed back by the server based on multi-modal interaction input information based on the full-duplex communication channel;
and calling the first control layer to process the at least one first interactive processing instruction so as to respectively execute corresponding service operation.
In a second aspect, an embodiment of the present application provides an interactive processing method, including:
establishing a full-duplex communication channel between the server and a second control layer of the client;
acquiring multi-mode interactive input information;
determining at least one first interaction processing instruction corresponding to the multi-modal interaction input information;
and sending the at least one first interactive processing instruction to the second control layer based on the full-duplex communication channel, and calling the first control layer of the client to process the at least one first interactive processing instruction by the second control layer so as to execute corresponding service operation.
In a third aspect, an embodiment of the present application provides an interactive processing method, including:
the client acquires at least one first interactive processing instruction transmitted by a second control layer; wherein the at least one first interaction processing instruction is determined by a server based on multi-modal interaction input information and sent to the second control layer based on a full-duplex communication channel established with the second control layer;
processing the at least one first interactive processing instruction to execute a corresponding business operation;
detecting a second interactive processing instruction triggered by screen interactive input information;
and processing the second interactive processing instruction to execute corresponding business operation.
In a fourth aspect, an embodiment of the present application provides an interaction processing apparatus, including:
the first communication establishing module is used for establishing a full-duplex communication channel with the server;
the instruction receiving module is used for receiving at least one first interaction processing instruction which is fed back by the server and is determined based on multi-modal interaction input information based on the full-duplex communication channel;
and the calling execution module is used for calling the first control layer to process the at least one first interactive processing instruction so as to respectively execute corresponding business operations.
In a fifth aspect, an embodiment of the present application provides an interaction processing apparatus, including:
the second communication establishing module is used for establishing a full-duplex communication channel with a second control layer of the client;
the information acquisition module is used for acquiring multi-mode interactive input information;
the instruction determining module is used for determining at least one first interaction processing instruction based on the multi-modal interaction input information;
and the instruction sending module is used for sending the at least one first interactive processing instruction to the second control layer based on the full-duplex communication channel, and the second control layer calls the first control layer of the client to process the at least one first interactive processing instruction so as to execute corresponding service operation.
In a sixth aspect, an embodiment of the present application provides an interaction processing apparatus, including:
the first instruction detection module is used for acquiring at least one first interactive processing instruction transmitted by the second control layer; wherein the at least one first interaction processing instruction is determined by a server based on multi-modal interaction input information and sent to the second control layer based on a full-duplex communication channel established with the second control layer;
the first instruction execution module is used for processing the at least one first interactive processing instruction to execute corresponding business operation;
the second instruction detection module is used for detecting a second interactive processing instruction triggered by the screen interactive input information;
and the second instruction execution module is used for processing the second interactive processing instruction to execute corresponding service operation.
In a seventh aspect, an embodiment of the present application provides a client, including a first control layer and a second control layer that perform service logic processing;
the first control layer is used for detecting a second interactive processing instruction triggered by screen interactive input information and processing the second interactive processing instruction to execute corresponding business operation; acquiring at least one first interactive processing instruction transmitted by the second control layer, and processing the at least one first interactive processing instruction to execute corresponding business operation;
the second control layer is used for establishing a full duplex communication channel with a server; receiving at least one first interaction processing instruction which is fed back by the server and determined based on multi-modal interaction input information based on the full-duplex communication channel; and calling the first control layer to process the at least one first interactive processing instruction so as to respectively execute corresponding service operation.
In an eighth aspect, an embodiment of the present application provides a terminal, including a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
establishing a full-duplex communication channel with a server;
receiving at least one first interaction processing instruction which is fed back by the server and determined based on multi-modal interaction input information based on the full-duplex communication channel;
and processing the at least one first interactive processing instruction to respectively execute corresponding business operations.
In a ninth aspect, an embodiment of the present application provides a server, including a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
establishing a full-duplex communication channel with a second control layer of the client;
acquiring multi-mode interactive input information;
determining at least one first interaction processing instruction corresponding to the multi-modal interaction input information;
and sending the at least one first interactive processing instruction to the second control layer based on the full-duplex communication channel, and calling the first control layer of the client to process the at least one first interactive processing instruction by the second control layer so as to execute corresponding service operation.
In a tenth aspect, an embodiment of the present application provides a physical machine, which is integrated with the terminal provided in the eighth aspect and the server provided in the ninth aspect.
In the embodiments of the application, the control layer that performs the client's service logic processing is divided into a first control layer and a second control layer, and a full-duplex communication channel is established between the second control layer and the server. Based on this full-duplex communication channel, at least one first interaction processing instruction fed back by the server based on multi-modal interaction input information is received, and the first control layer is then invoked to process the at least one first interaction processing instruction so that the corresponding service operations are respectively executed. With the embodiments of the application, requests actively initiated by the server can be received over the full-duplex communication channel, ensuring that interaction processing instructions fed back on the basis of multi-modal interaction input information can be received and that service operations are realized from those instructions. The multi-modal interaction input mode is simple to operate and can trigger several interaction processing instructions at once.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram illustrating an embodiment of a client provided in the present application;
FIG. 2 is a flow chart illustrating one embodiment of an interaction processing method provided herein;
FIG. 3 is a flow chart illustrating a further embodiment of an interaction processing method provided by the present application;
FIG. 4 is a flow chart illustrating a further embodiment of an interaction processing method provided by the present application;
FIG. 5 is a schematic diagram illustrating an interaction processing method in an actual application according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram illustrating an embodiment of an interaction processing apparatus provided in the present application;
FIG. 7 is a schematic structural diagram of an interactive processing device according to another embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating an embodiment of a terminal provided by the present application;
FIG. 9 is a schematic diagram illustrating an interactive processing device according to another embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an embodiment of a server provided in the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, the claims, and the above figures include a number of operations that appear in a particular order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein or in parallel. The operation numbers, e.g. 101, 102, etc., are merely used to distinguish the operations and do not themselves represent any order of execution. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that the terms "first", "second", etc. in this document are used to distinguish different messages, devices, modules, and so on; they do not represent a sequential order, nor do they require that the "first" and "second" items be of different types.
The technical solution of the embodiments of the application can be applied to scenarios in which business object transactions are conducted through a self-service terminal based on user interaction input information, for example a self-service terminal in an offline service place such as a food ordering machine deployed in a catering venue, where the business object is specifically a commodity. Of course, it can also be applied to other service scenarios in which service processing is performed based on user interaction input information.
As described in the background, user interaction input information is currently usually screen interaction input information, produced by the user touching an operation control on the screen or operating a physical key, and a user usually has to trigger several business operations to fulfill one requirement.
To improve interaction efficiency and service processing efficiency, the inventors realized through a series of studies that humans and machines can interact in multiple modalities, for example through voice, motion, or face input, simulating the way people interact with one another. A user can then trigger and complete service processing merely by providing multi-modal interaction input, without performing screen interaction input, which simplifies interaction operations, improves interaction efficiency, and thus ensures service processing efficiency.
However, the inventors found that clients and servers currently communicate based on HTTP (HyperText Transfer Protocol); that is, in the client's network architecture, the control layer that performs service logic processing carries out service processing by communicating with the server over HTTP. HTTP is a stateless, connectionless, unidirectional protocol that follows a request/response model: a communication request can only be initiated by the client, and the server responds to it. Because recognition of the multi-modal interaction input information has to be performed by the server, which then feeds a corresponding instruction back to the client according to the recognition result to trigger the business operation, situations arise in which the server needs to actively initiate a request.
In order to implement multi-modal interaction input, the inventors carried out further study and propose the technical solution of the embodiments of the present application. In these embodiments, the control layer that performs service logic processing at the client is divided into a first control layer and a second control layer, and a full-duplex communication channel is established between the second control layer and the server. Based on this channel, at least one first interaction processing instruction fed back by the server on the basis of multi-modal interaction input information is received, and the first control layer is then invoked to process the at least one first interaction processing instruction so that the corresponding service operations are respectively executed. With the embodiments of the application, requests actively initiated by the server can be received over the full-duplex communication channel, so interaction processing instructions fed back on the basis of multi-modal interaction input information can be received and service operations realized from them. The multi-modal interaction input mode is simple to operate and can trigger several interaction processing instructions at once, thereby further simplifying interaction operations and improving interaction efficiency and service processing efficiency.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic diagram of a network architecture of a client according to an embodiment of the present application, where the client may include a first control layer 101 and a second control layer 102 for performing service logic processing;
the first control layer 101 may be a web control layer, which communicates with a server by using HTTP, and is the same as the control layer in the prior art;
the second control layer 102 communicates with the server using a full duplex communication protocol, which may be, for example, a websocket (a full duplex communication protocol).
Of course, the client may further include a view layer 103 for displaying data and the like.
The client may further include a data layer 104 for providing data services to the first control layer 101 and the second control layer 102;
in the prior art, a client mainly comprises a data layer, a view layer and a control layer, and the control layer performs service logic processing, but the control layer and a server are in one-way communication, only the control layer initiates a communication request, and the server responds to the request.
In the embodiments of the application, in order to process multi-modal interaction input information, the client is composed of a data layer, a view layer, a first control layer, and a second control layer. The second control layer is responsible for establishing the full-duplex communication channel with the server and can receive communication requests actively initiated by the server; it then carries out the service processing, specifically by invoking the first control layer to perform the service operation. The first control layer communicates with the server in the same way as in the prior art.
It will be appreciated that the data layer, the view layer, the first control layer, and the second control layer may refer to program modules or program code used in the client to implement different functions.
The first control layer 101 may be configured to obtain at least one first interactive processing instruction transmitted by the second control layer, and process the at least one first interactive processing instruction to execute a corresponding service operation; detecting a second interactive processing instruction triggered by screen interactive input information, and processing the second interactive processing instruction to execute corresponding business operation;
the second control layer 102 may be configured to establish a full-duplex communication channel with a server; receiving at least one first interaction processing instruction fed back by the server based on multi-modal interaction input information based on the full-duplex communication channel; and calling the first control layer to process the at least one first interactive processing instruction so as to respectively execute corresponding service operation.
The second control layer calls the first control layer to process the at least one first interactive processing instruction, namely, the at least one first interactive processing instruction is transferred to the first control layer.
Wherein, the first control layer executing the business operation may be providing data in combination with the data layer to realize the business operation.
If the service operation executed by the first control layer is related to data display, the view layer can be updated to display related data, and the like.
The multi-modal interactive input information provided by the user reaches the server, the server can determine at least one first interactive processing instruction, so that the at least one first interactive processing instruction is issued to the second control layer, and the second control layer calls the first control layer to process, so that corresponding service operation is realized.
The at least one first interaction processing instruction may be determined by the service end based on the multi-modal interaction input information, or may be identified from the multi-modal interaction input information.
For screen interaction input information provided by the user, each input is usually produced by operating a control or a key, and each control or key corresponds to a pre-configured instruction. The first control layer can therefore directly detect the second interaction processing instruction corresponding to each screen interaction input and process it directly, thereby realizing the corresponding business operation.
It should be noted that the first interaction processing instruction and the second interaction processing instruction both refer to instructions that trigger service operations; for example, in a self-service terminal the interaction processing instruction may be an object selection instruction, an order update instruction, an order cancellation instruction, an order settlement instruction, or the like. The terms "first" and "second" merely distinguish interaction processing instructions triggered by different types of interaction input information.
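As a concrete illustration of this layered structure, the following TypeScript sketch models the data layer, view layer, first control layer, and second control layer as separate modules of the client. All class names, method signatures, and the JSON message shape are assumptions made for exposition; they are not taken from this application.

```typescript
// Illustrative sketch only: names and signatures below are assumptions.
interface InteractionInstruction {
  type: string;       // e.g. 'object_select', 'order_update', 'page_switch'
  payload?: unknown;   // instruction-specific parameters
}

class DataLayer {
  // Data layer: provides data services to both control layers.
  async query(key: string): Promise<unknown> {
    return null; // e.g. read from a local cache or a remote data service
  }
}

class ViewLayer {
  // View layer: displays data.
  render(view: string, data: unknown): void {
    console.log(`render ${view}`, data);
  }
}

class FirstControlLayer {
  // First control layer: communicates with the server over HTTP and executes
  // the business operations.
  constructor(private data: DataLayer, private view: ViewLayer) {}

  // A screen control or key maps directly to a pre-configured (second) instruction.
  handleScreenInput(instruction: InteractionInstruction): void {
    this.process(instruction);
  }

  // Instructions handed over by the second control layer reuse the same path.
  process(instruction: InteractionInstruction): void {
    // ...perform the business operation, possibly via HTTP and this.data,
    // then update the view layer if display is involved.
    this.view.render('result', instruction);
  }
}

class SecondControlLayer {
  // Second control layer: keeps a full-duplex channel to the server and hands
  // received first instructions to the first control layer.
  constructor(private first: FirstControlLayer, private serverUrl: string) {}

  connect(): void {
    const socket = new WebSocket(this.serverUrl); // e.g. 'ws://host/interaction'
    socket.onmessage = (event) => {
      const instructions: InteractionInstruction[] = JSON.parse(event.data);
      instructions.forEach((i) => this.first.process(i)); // invoke the first control layer
    };
  }
}
```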
With reference to the network structure shown in fig. 1, the following describes the technical solution of the present application in detail with respect to the first control layer, the second control layer, and the server.
Fig. 2 is a flowchart of an embodiment of an interaction processing method provided in an embodiment of the present application, which is described from the perspective of a second control layer of a client, where the method may include the following steps:
201: and establishing a full-duplex communication channel with the server.
Wherein, a full duplex communication channel with the server can be established based on a full duplex communication protocol.
The full duplex communication protocol may be, for example, WebSocket. Based on a full-duplex communication protocol, full-duplex communication can be realized, so that any party can push data to the other party through the established full-duplex communication channel. The full duplex communication channel is established only once, and the connection state can be kept all the time.
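A minimal TypeScript sketch of step 201 follows, assuming the standard browser WebSocket API; the URL, the message format, and the reconnect policy are illustrative assumptions rather than part of the application.

```typescript
// Establish the full-duplex channel once and keep it open so either side can
// push data; the server can thereby actively initiate requests.
function establishChannel(url: string, onInstruction: (msg: unknown) => void): WebSocket {
  const socket = new WebSocket(url); // single persistent full-duplex connection

  socket.onopen = () => {
    console.log('full-duplex channel established; connection is kept open');
  };

  // The server pushes interaction processing instructions to the second control layer.
  socket.onmessage = (event) => onInstruction(JSON.parse(event.data));

  // The channel is set up only once; if it drops, re-establish it so the
  // server can keep initiating requests (assumed reconnect policy).
  socket.onclose = () => setTimeout(() => establishChannel(url, onInstruction), 1000);

  return socket;
}

// Usage: the second control layer opens the channel once at startup.
const channel = establishChannel('ws://localhost:8080/interaction', (msg) => {
  console.log('server-initiated message', msg);
});
```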
202: and receiving at least one first interaction processing instruction fed back by the server based on multi-modal interaction input information based on the full-duplex communication channel.
As an alternative, the method may further include:
collecting the multi-mode interactive input information through a first collection assembly;
and sending the multi-modal interaction input information to the server based on the full-duplex communication channel so that the server can determine at least one first interaction processing instruction corresponding to the multi-modal interaction input information.
In one practical application, the first collecting component can be configured by a physical machine deploying the client, and the client can call the first collecting component to collect the multi-modal interactive input information.
As another alternative, the multi-modal interaction input information may be collected through a second collection component and obtained directly by the server.
In a practical application, the client and the server may be configured in the same physical machine, for example a food ordering machine currently deployed in an offline service place, in which both the client and the server are configured, i.e. an integrated device. The second collection component is configured in the physical machine, and the server can directly call it to acquire the multi-modal interaction input information.
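As an illustration of the first alternative, the sketch below (TypeScript, using the browser getUserMedia and MediaRecorder APIs) collects voice input through a hypothetical first collection component and uploads it over the already-established full-duplex channel; the chunking interval and framing are assumptions.

```typescript
// Collect voice interaction information from the microphone and push it to
// the server over the full-duplex channel so the server can determine the
// corresponding first interaction processing instructions.
async function streamVoiceInput(socket: WebSocket): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);

  recorder.ondataavailable = (event) => {
    if (socket.readyState === WebSocket.OPEN && event.data.size > 0) {
      socket.send(event.data); // upload the collected multi-modal input
    }
  };

  recorder.start(250); // emit an audio chunk roughly every 250 ms (assumed interval)
}
```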
Wherein the multi-modal interaction input information may include one or more of biometric input information, voice interaction information, motion input information, gesture input information, and other sensory interaction information.
The multi-modal interaction input information can convey at least one first interaction processing instruction. Compared with screen interaction input, it can therefore convey several interaction processing instructions at once, enabling batch interaction without requiring repeated interaction inputs, which further simplifies interaction operations and improves interaction efficiency.
For example, the multi-modal interaction input information may include voice interaction information. In an object selection scenario, a user may speak object selection instructions for several business objects in one utterance; recognizing that voice interaction information then yields several object selection instructions, each of which selects one business object, so the business objects can be selected in batch. With screen interaction input, by contrast, selecting several business objects requires operating the control or key corresponding to each one.
203: and calling the first control layer to process the at least one first interactive processing instruction so as to respectively execute corresponding service processing.
In this embodiment, the second control layer is responsible for performing full duplex communication with the server to obtain at least one first interactive processing instruction determined by the server based on the multi-modal interactive input information, and then the first control layer is invoked to process the at least one first interactive processing instruction, so that corresponding service processing can be performed. Because the interaction processing instruction is triggered based on the multi-mode interaction input information, compared with a screen interaction input mode, the interaction operation is simple, the interaction efficiency is high, and therefore the service processing efficiency can be improved.
The first control layer may be a web control layer, that is, it may communicate with the server based on the HTTP protocol, and processing the at least one first interaction processing instruction may involve interaction with the server. The way an interaction processing instruction is processed is the same as in the prior art; only the way the instruction is obtained differs, so the specific processing manner is not limited in this application.
The first control layer can also detect a second interactive processing instruction triggered by the screen interactive input information, and process the second interactive processing instruction to execute corresponding service operation.
That is, the instruction processing function of the first control layer can be multiplexed, the second control layer only needs to acquire and transmit the interactive processing instruction, and the interactive processing instruction can be processed by calling the first control layer.
The first interaction processing instruction may include reply content, and at least one first interaction processing instruction may include at least one reply content.
Thus, in some embodiments, said invoking the first control layer to process the at least one first interaction processing instruction to perform the respective business operation may comprise:
and calling the first control layer to output at least one reply content.
For example, in a scenario of selecting a service object through a self-service terminal, the multi-modal interaction input information may be biometric input information, such as human face input information, so that the server, in response to the multi-modal interaction input information, may determine that a user is present in a certain range of the self-service terminal, and in order to improve user experience, output welcome information through the self-service terminal, that is, may feed back reply content including the welcome information.
Wherein, the reply content can comprise displayable content and voice content; the displayable content may include pictures and/or text, among others.
Thus, in some embodiments, said invoking the first control layer to output at least one reply content comprises:
if any reply content comprises displayable content, calling a first control layer to update a view layer based on the displayable content so as to display the displayable content;
and if any reply content comprises the voice content, calling the first control layer to control the audio component to play the voice content.
The displayable content needs to be displayed by updating the view layer, and the voice content can be played by adopting the audio component.
Of course, in addition to reply content corresponding to the multi-modal interaction input information, the first interaction processing instruction may also include a service processing instruction, so that invoking the first control layer to process the at least one first interaction processing instruction to respectively perform the corresponding service operations may include:
and calling the first control layer to output at least one reply content, and processing the at least one service processing instruction to execute corresponding service operation.
In some embodiments, after the invoking the first control layer to process the at least one first interaction processing instruction to respectively perform the corresponding business operations, the method may further include:
receiving a processing result corresponding to the at least one first interactive processing instruction fed back by the server based on the full-duplex communication channel;
and calling the first control layer to output the processing result.
The processing result may include a service operation response result, and may also include an instruction execution success or failure result, and the like.
Further, in certain embodiments, the method may further comprise:
receiving prompt contents corresponding to the processing result fed back by the server based on the full-duplex communication channel;
and calling the first control layer to output the prompt content.
The prompt content can be used for prompting the business operation response result or the execution success or failure result of the instruction, and the like.
Wherein the prompt content may include displayable content and/or voice content; therefore, the calling the first control layer to output the prompt content includes:
if the prompt content comprises displayable content, calling the first control layer to update the view layer so as to display the displayable content;
and if the prompt content comprises voice content, calling the first control layer to control an audio component to play the voice content.
In one option, the multimodal interaction input information may include biometric input information;
the receiving, based on the full-duplex communication channel, the at least one first interaction processing instruction determined based on the multi-modal interaction input information fed back by the server may include:
based on the full-duplex communication channel, receiving a page switching instruction fed back by the server end in response to the biological characteristic input information;
the invoking the first control layer to process the at least one first interactive processing instruction to respectively execute the corresponding service operations includes:
and calling the first control layer to process the page switching instruction so as to switch from the screen saver page to a preset page.
In addition, the page switching instruction may further include reply content;
thus, the invoking the first control layer to process the page switch instruction to switch from a screen saver page to a predetermined page may comprise:
and calling a first control layer to process the page switching instruction so as to switch from a screen saver page to a preset page, and outputting the reply content.
The reply content may include displayable content and/or voice content, and therefore, the first control layer may be specifically invoked to update the view layer to display the displayable content, and/or invoke the first control layer to control the audio component to output the voice content.
In the application scenario in which a service object is selected through the self-service terminal, when the server determines from the biometric input information that a user is present within a certain range of the self-service terminal, it can issue a page switching instruction, so that the second control layer invokes the first control layer to switch from the screen saver page to a predetermined page. The predetermined page may be, for example, a service object selection page containing relevant information about the service objects.
Meanwhile, reply content can also be fed back to call the first control layer by the second control layer to output the reply content, and the reply content can contain welcome information or selection prompt information, such as "welcome to use the self-service terminal, you can input a desired business object by voice".
The biological characteristic input information can be specifically face input information and is acquired through the image acquisition assembly.
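The following sketch illustrates one possible client-side handling of such a page switching instruction; the instruction shape, page names, and element id are assumptions made for exposition.

```typescript
// A page switching instruction pushed by the server after face input is detected.
interface PageSwitchInstruction {
  type: 'page_switch';
  targetPage: string;                                  // e.g. the object selection page
  replyContent?: { text?: string; speechUrl?: string };
}

function handlePageSwitch(
  instr: PageSwitchInstruction,
  switchPage: (page: string) => void  // view-layer routing function (assumed)
): void {
  // Leave the screen saver page and show the predetermined page.
  switchPage(instr.targetPage);

  // Output the reply content, e.g. a welcome / selection prompt such as
  // "Welcome, you can say the business objects you want".
  if (instr.replyContent?.text) {
    const banner = document.getElementById('prompt-banner');
    if (banner) banner.textContent = instr.replyContent.text;
  }
  if (instr.replyContent?.speechUrl) {
    void new Audio(instr.replyContent.speechUrl).play();
  }
}
```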
In another alternative, the multimodal interaction input information may include voice interaction information;
the receiving, based on the full-duplex communication channel, the at least one first interaction processing instruction determined based on the multi-modal interaction input information fed back by the server may include:
and receiving at least one first interaction processing instruction which is fed back by the server and is obtained by identification from the voice interaction information based on the full-duplex communication channel.
The voice interaction information is obtained by collecting voice input of a user, and the user can input at least one first interaction processing instruction by voice, so that the at least one first interaction processing instruction can be obtained by recognition through a voice recognition technology.
The processing result corresponding to the at least one first interactive processing instruction fed back by the server and the prompt content corresponding to the processing result can be received based on the full-duplex communication channel.
And the first control layer can be called to output the processing result and the prompting content.
The prompt content can comprise displayable content and/or voice content, and the first control layer outputs the prompt content to realize a man-machine conversation effect.
The user can input a plurality of first interactive processing instructions through voice, and the second control layer respectively calls the first control layer to execute the instructions.
For example, in an application scenario in which a self-service terminal performs service object selection, a plurality of service objects may be added, modified, or deleted by voice input at a time.
The embodiments of the application can be applied to scenarios in which business object selection is realized based on voice interaction information, for example a meal ordering scenario, where the at least one first interaction processing instruction includes processing instructions related to ordering operations.
Thus, in some embodiments, the at least one first interaction processing instruction may comprise at least one object selection instruction, wherein each object selection instruction may comprise an object identification, a selection requirement, and the like. In the ordering scene, the object may specifically refer to a food item, the object selection instruction may specifically refer to a food item selection instruction, and the object identifier may specifically refer to a food item name.
The invoking the first control layer to process the at least one first interaction processing instruction to respectively perform the corresponding service operations may include:
and calling the first control layer to process at least one object selection instruction so as to generate a business order based on the at least one object selection instruction.
The processing result for the at least one object selection instruction may include the service order, that is, a service operation response result, so that the second control layer invokes the first control layer to update the view layer to display the service order.
The prompt content corresponding to the processing result may combine the success or failure of the instruction execution, the service operation response result, and dialogue content obtained by NLP (Natural Language Processing) of the voice interaction information.
For example, the voice interaction message is "i want a cup of cafe latte without sugar". The prompt may be "good, added" or "not good, please re-enter".
In some embodiments, the at least one first interaction processing instruction may comprise at least one order update instruction for a business order; each order updating instruction comprises an object identifier and an operation type; the service order may be generated based on the above-mentioned voice interaction input method, or may be implemented based on a screen interaction input method, or in combination with voice interaction or screen interaction.
The invoking the first control layer to process the at least one first interactive processing instruction to respectively execute the corresponding service operations includes:
and calling the first control layer to process at least one order updating instruction so as to update the service order.
The business order may be updated based specifically on the object identification and the operation type.
The operation type may include, for example, a business object addition or deletion, a business object attribute information modification (e.g., a quantity modification), and so forth.
Therefore, according to the embodiment of the application, the updating of a plurality of business objects in the business order can be realized in a voice interaction mode.
The processing result for the at least one order updating instruction may include the updated service order, that is, the service operation response result, so that the second control layer invokes the first control layer to update the view layer to display the updated service order.
The prompt content corresponding to the processing result may combine the success or failure of the instruction execution, the service operation response result, and dialogue content obtained by NLP processing of the voice interaction information.
For example, the voice interaction message is "latte changes to two cups". The prompt may be "good, modified" or "not good, please re-enter".
In some embodiments, the at least one first interaction processing instruction may comprise an order settlement instruction for a business order;
the invoking the first control layer to process the at least one first interaction processing instruction to respectively perform the corresponding service operations may include:
and calling a first control layer to process the order settlement instruction so as to execute settlement operation.
Optionally, the invoking the first control layer to process the order settlement instruction to perform a settlement operation includes:
and calling a first control layer to process the order settlement instruction, acquiring settlement prompt information comprising a payment link, and updating a view layer to display the settlement prompt information.
The payment link is used for calling a third party payment system to carry out online payment.
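A sketch of the settlement step follows; the HTTP endpoint, response shape, and element id are hypothetical, and the first control layer is assumed to obtain the settlement prompt information over HTTP as it does for screen-triggered instructions.

```typescript
interface SettlementPrompt {
  orderId: string;
  amount: number;
  paymentLink: string; // used to invoke a third-party payment system for online payment
}

async function settleOrder(orderId: string): Promise<void> {
  // Hypothetical endpoint: the first control layer requests settlement prompt
  // information for the business order.
  const response = await fetch(`/api/orders/${orderId}/settlement`, { method: 'POST' });
  const prompt: SettlementPrompt = await response.json();

  // Update the view layer to display the settlement prompt information.
  const panel = document.getElementById('settlement-panel');
  if (panel) {
    panel.textContent = `Amount due: ${prompt.amount}. Pay at: ${prompt.paymentLink}`;
  }
}
```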
Further, in some embodiments, the at least one first interaction processing instruction may comprise an order cancellation instruction for a business order;
the invoking the first control layer to process the at least one first interaction processing instruction to respectively perform the corresponding service operations may include:
and calling a first control layer to process the order canceling instruction so as to cancel the service order.
Further, in some embodiments, the multimodal interaction input information may include biometric input information as well as voice interaction information;
the receiving, based on the full-duplex communication channel, at least one first interaction processing instruction identified and obtained from the voice interaction information fed back by the server may include:
and receiving a detection result fed back by the server side and responding to the existence of the biological characteristic input information, and identifying at least one obtained first interaction processing instruction from the voice interaction information based on the full-duplex communication channel.
If the server detects that biometric input information is present, it can be considered that a user is within a certain range of the self-service terminal and intends to use it, so the obtained voice interaction information is recognized at this time; otherwise the process can end without any processing, which improves the accuracy of business processing.
Fig. 3 is a flowchart of another embodiment of an interaction processing method provided in an embodiment of the present application, where this embodiment describes a technical solution of the present application from a server side perspective, and the method may include the following steps:
301: and establishing a full-duplex communication channel with a second control layer of the client.
302: and acquiring multi-modal interactive input information.
303: and determining at least one first interaction processing instruction corresponding to the multi-modal interaction input information.
304: and sending the at least one first interactive processing instruction to a second control layer based on the full-duplex communication channel, and calling the first control layer of the client by the second control layer to process the at least one first interactive processing instruction so as to execute corresponding service operation.
As an optional manner, the obtaining the multi-modal interaction input information may include:
receiving multi-mode interactive input information sent by the second control layer based on the full-duplex communication channel; the multi-modal interaction input information is acquired by the client through a first acquisition component.
As another alternative, the obtaining multimodal interaction input information includes:
and acquiring multi-mode interactive input information through a second acquisition assembly.
In some embodiments, the multimodal interaction input information may include voice interaction information;
the determining at least one first interaction processing instruction based on the multi-modal interaction input information comprises:
at least one first interaction processing instruction in the voice interaction information is recognized.
Fig. 4 is a flowchart of another embodiment of an interactive processing method provided in an embodiment of the present application, where this embodiment describes a technical solution of the present application from the perspective of a first control layer in a client, and the method may include the following steps:
401: and acquiring at least one first interactive processing instruction transmitted by the second control layer.
Wherein the at least one first interaction processing instruction is determined by a server based on multi-modal interaction input information and sent to the second control layer based on a full-duplex communication channel established with the second control layer.
402: and processing the at least one first interactive processing instruction to execute the corresponding business operation.
403: and detecting a second interactive processing instruction triggered by the screen interactive input information.
404: and processing the second interactive processing instruction to execute corresponding business operation.
The first control layer can directly detect and process a second interactive processing instruction corresponding to the screen interactive input information.
The second control layer acquires the at least one first interaction processing instruction fed back by the server based on the multi-modal interaction input information and reuses the first control layer, which processes the at least one first interaction processing instruction.
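The sketch below illustrates the first control layer's two entry points described in this embodiment: instructions detected directly from screen interaction input, and instructions handed over by the second control layer; class and method names are assumptions.

```typescript
interface InteractionInstruction {
  type: string;
  payload?: unknown;
}

class FirstControlLayer {
  // 403-404: each on-screen control or key is bound to a pre-configured
  // instruction, so a screen interaction input is detected and processed directly.
  bindScreenControl(buttonId: string, instruction: InteractionInstruction): void {
    document.getElementById(buttonId)?.addEventListener('click', () => this.process(instruction));
  }

  // 401-402: instructions transferred by the second control layer reuse the
  // same processing path, so the instruction-processing logic is multiplexed.
  handleDelegated(instructions: InteractionInstruction[]): void {
    instructions.forEach((i) => this.process(i));
  }

  private process(instruction: InteractionInstruction): void {
    // Execute the corresponding business operation (possibly over HTTP) and
    // update the view layer if display is involved.
    console.log('processing', instruction.type, instruction.payload);
  }
}
```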
For convenience of understanding, as shown in the interaction processing diagram shown in fig. 5, the first control layer 101 in the client may establish an HTTP communication channel with the server 501, and the second control layer 102 may establish a full-duplex communication channel with the server 501.
For the multi-modal interaction input information, the multi-modal interaction input information may be acquired by the second control layer 102 and uploaded to the server 501, or may be acquired by the server 501.
After the server 501 identifies the first interactive processing instruction, the server may feed back at least one first interactive processing instruction, and the second control layer 102 may call the first control layer 101 to process the at least one first interactive processing instruction.
For the screen interaction input information, the first control layer 101 may directly detect and obtain the second interaction processing instruction and process the second interaction processing instruction.
When the first control layer 101 performs the interactive processing instruction processing, if the display operation is involved, the view layer 103 may be updated to display, and the user may view the view.
For the at least one first interaction processing instruction, the second control layer may invoke the first control layer sequentially, once per instruction; of course, all of the instructions may also be handed to the first control layer at once and processed by it together.
Fig. 6 is a schematic structural diagram of an embodiment of an interaction processing apparatus provided in an embodiment of the present application, where the apparatus may be used as a second control layer configured in a client, and the apparatus may include:
a first communication establishing module 601, configured to establish a full-duplex communication channel with a server;
an instruction receiving module 602, configured to receive, based on the full-duplex communication channel, at least one first interaction processing instruction determined based on multi-modal interaction input information and fed back by the server;
the calling and executing module 603 is configured to call the first control layer to process the at least one first interactive processing instruction so as to respectively execute corresponding service operations.
In some embodiments, the apparatus may further comprise:
the first acquisition module is used for acquiring the multi-mode interactive input information through a first acquisition assembly; and sending the multi-modal interaction input information to the server based on the full-duplex communication channel so that the server can determine at least one first interaction processing instruction corresponding to the multi-modal interaction input information.
In some embodiments, the multi-modal interaction input information is collected through a second collection component and obtained directly by the server.
In some embodiments, the first interaction processing instruction includes reply content;
the call execution module is specifically configured to call the first control layer to output at least one reply content.
Optionally, the call execution module may be specifically configured to, if any reply content includes displayable content, call the first control layer to update the view layer based on the displayable content to display the displayable content; and if any reply content comprises the voice content, calling the first control layer to control the audio component to play the voice content.
In some embodiments, the apparatus may further comprise:
a first output calling module, configured to receive, based on the full-duplex communication channel, a processing result corresponding to the at least one first interactive processing instruction fed back by the server; and calling the first control layer to output the processing result.
In some embodiments, the apparatus may further comprise:
the second output calling module is used for receiving prompt contents corresponding to the processing results fed back by the server side based on the full-duplex communication channel; and calling the first control layer to output the prompt content.
In some embodiments, the step of calling the first control layer to output the prompt content by the second output calling module specifically includes: if the prompt content comprises displayable content, calling the first control layer to update the view layer so as to display the displayable content; and if the prompt content comprises voice content, calling the first control layer to control an audio component to play the voice content.
In some embodiments, the multi-modal interaction input information comprises biometric input information;
the instruction receiving module is specifically configured to receive, based on the full-duplex communication channel, a page switching instruction fed back by the server in response to the biometric input information.
The call execution module may be specifically configured to call the first control layer to process the page switch instruction to switch from a screen saver page to a predetermined page.
In some embodiments, the multimodal interaction input information comprises voice interaction information;
the instruction receiving module is specifically configured to receive, based on the full-duplex communication channel, at least one first interaction processing instruction identified and obtained from the voice interaction information and fed back by the server.
In some embodiments, the at least one first interaction processing instruction comprises at least one object selection instruction; wherein each object selection instruction comprises an object identification;
the call execution module is specifically configured to call the first control layer to process the at least one object selection instruction, so as to generate a service order based on the at least one object selection instruction.
In some embodiments, the at least one first interactive processing instruction comprises at least one order update instruction; each order updating instruction comprises an object identifier and an operation type;
the call execution module is specifically configured to call the first control layer to process the at least one order update instruction to update the service order.
In some embodiments, the at least one first interaction processing instruction comprises an order settlement instruction for a business order;
the call execution module is specifically configured to call the first control layer to process the order settlement instruction to execute a settlement operation.
Optionally, the call execution module may be specifically configured to call the first control layer to process the order settlement instruction, obtain settlement prompt information including a payment link, and update the view layer to display the settlement prompt information.
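The ordering-related instructions above (object selection, order update, order settlement) could be dispatched by the first control layer roughly as follows; the instruction shapes, the order model, and the payment link are hypothetical stand-ins rather than the patent's concrete data structures:

```typescript
// Illustrative sketch only; all field names and the payment link are assumptions.

type OrderInstruction =
  | { type: "select_object"; objectId: string }
  | { type: "update_order"; objectId: string; operation: "add" | "remove" }
  | { type: "settle_order" };

interface OrderItem { objectId: string; quantity: number; }

class OrderController {
  private items: OrderItem[] = [];

  process(instruction: OrderInstruction): void {
    switch (instruction.type) {
      case "select_object":
        // Generate / extend the service order from the selected object.
        this.addItem(instruction.objectId);
        break;
      case "update_order":
        // Update the service order according to the operation type.
        if (instruction.operation === "add") this.addItem(instruction.objectId);
        else this.items = this.items.filter((i) => i.objectId !== instruction.objectId);
        break;
      case "settle_order":
        // Execute the settlement operation: obtain settlement prompt
        // information including a payment link and surface it in the view layer.
        this.showSettlementPrompt("https://pay.example.com/order"); // placeholder link
        break;
    }
  }

  private addItem(objectId: string): void {
    const existing = this.items.find((i) => i.objectId === objectId);
    if (existing) existing.quantity += 1;
    else this.items.push({ objectId, quantity: 1 });
  }

  private showSettlementPrompt(paymentLink: string): void {
    console.log(`Scan to pay: ${paymentLink}`); // stand-in for a view-layer update
  }
}
```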
In some embodiments, the multimodal interaction input information further comprises biometric input information;
the instruction receiving module may be specifically configured to receive, based on the full-duplex communication channel, at least one first interaction processing instruction that is fed back by the server and identified from the voice interaction information in response to a detection result indicating the presence of the biometric input information.
The interaction processing apparatus shown in fig. 6 may execute the interaction processing method of the embodiment shown in fig. 2; the implementation principle and technical effect are similar and are not described again. The specific manner in which each module and unit of the interaction processing apparatus performs its operations has been described in detail in the related method embodiments and is not repeated here.
Fig. 7 is a schematic structural diagram of an embodiment of an interaction processing apparatus according to an embodiment of the present application, where the apparatus may be used as a first control layer configured in a client, and the apparatus may include:
a first instruction detection module 701, configured to obtain at least one first interactive processing instruction transmitted by a second control layer; wherein the at least one first interaction processing instruction is determined by a server based on multi-modal interaction input information and sent to the second control layer based on a full-duplex communication channel established with the second control layer;
a first instruction execution module 702, configured to process the at least one first interaction processing instruction to execute a corresponding service operation;
a second instruction detection module 703, configured to detect a second interactive processing instruction triggered by screen interactive input information;
a second instruction executing module 704, configured to process the second interactive processing instruction to execute a corresponding service operation.
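A minimal sketch of such a first control layer, showing how server-pushed first instructions and locally detected screen-interaction (second) instructions converge on the same business-operation path; every name here is an illustrative assumption:

```typescript
// Illustrative sketch only; Instruction and the event shape are assumed.

interface Instruction { type: string; payload?: unknown; }

class FirstControlLayerImpl {
  // Invoked by the second control layer with a server-determined first instruction.
  processFirstInstruction(instruction: Instruction): void {
    this.executeBusinessOperation(instruction);
  }

  // Detects a second interaction processing instruction from screen interaction input.
  onScreenInteraction(event: { target: string; action: string }): void {
    const instruction: Instruction = { type: event.action, payload: event.target };
    this.executeBusinessOperation(instruction);
  }

  private executeBusinessOperation(instruction: Instruction): void {
    // Both instruction sources converge here, so touch-driven and
    // voice/biometric-driven input stay consistent.
    console.log(`executing operation for instruction: ${instruction.type}`);
  }
}
```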
The interaction processing apparatus shown in fig. 7 may execute the interaction processing method of the embodiment shown in fig. 4; the implementation principle and technical effect are similar and are not described again. The specific manner in which each module and unit of the interaction processing apparatus performs its operations has been described in detail in the related method embodiments and is not repeated here.
In one possible design, the interaction processing apparatus of the embodiment shown in fig. 6 may be implemented as a terminal, as shown in fig. 8, which may include a storage component 801 and a processing component 802;
the storage component 801 stores one or more computer instructions, which are invoked and executed by the processing component 802.
The processing component 802 is configured to:
establishing a full-duplex communication channel with a server;
receiving at least one first interaction processing instruction which is fed back by the server and determined based on multi-modal interaction input information based on the full-duplex communication channel;
and processing the at least one first interactive processing instruction to respectively execute corresponding business operations.
Optionally, the processing component 802 may be further configured to:
detecting a second interactive processing instruction triggered by screen interactive input information;
and processing the second interactive processing instruction to execute corresponding business operation.
The processing component 802 may include one or more processors executing computer instructions to perform all or some of the steps of the methods described above. Of course, the processing elements may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components configured to perform the above-described methods.
The storage component 801 is configured to store various types of data to support operations at the terminal. The memory components may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Of course, the terminal may also include other components, such as an input/output interface, a communication component, a display component, a first acquisition component, and the like.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the terminal and other devices, and the like.
The display element may be an Electroluminescent (EL) element, a liquid crystal display or a microdisplay having a similar structure, or a retina-directable display or similar laser scanning type display.
The first acquisition component can be used for acquiring multi-modal interaction input information, and the processing component can be further used for sending the multi-modal interaction input information acquired by the first acquisition component to the server so that the server can determine at least one first interaction processing instruction corresponding to the multi-modal interaction input information.
The first acquisition component may include, for example, one or more of an image acquisition component, an audio acquisition component, and a biometric acquisition component, among others.
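For illustration only, a collected sample might be wrapped and forwarded over the existing full-duplex channel as follows; the payload fields and the use of a browser WebSocket as the channel are assumptions, not a defined format:

```typescript
// Illustrative sketch only; the payload fields are assumptions.

interface MultiModalInput {
  modality: "audio" | "image" | "biometric";
  data: string;          // e.g. a base64-encoded sample
  capturedAt: number;    // capture time in epoch milliseconds
}

// Forwards a collected sample to the server so it can determine the
// corresponding first interaction processing instruction(s).
function forwardMultiModalInput(channel: WebSocket, input: MultiModalInput): void {
  channel.send(JSON.stringify(input));
}

// Example usage (sample content elided):
// forwardMultiModalInput(channel, { modality: "audio", data: "...", capturedAt: Date.now() });
```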
The embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the method for processing interaction in the embodiment shown in fig. 2 or fig. 4 may be implemented.
In one practical application, the at least one first interaction processing instruction comprises processing instructions related to a meal ordering operation; the terminal may include a meal ordering machine.
Fig. 9 is a schematic structural diagram of another embodiment of an interaction processing apparatus according to an embodiment of the present application, where the apparatus may include:
a second communication establishing module 901, configured to establish a full-duplex communication channel with a second control layer of the client;
an information obtaining module 902, configured to obtain multi-modal interactive input information;
an instruction determining module 903, configured to determine at least one first interaction processing instruction based on the multi-modal interaction input information;
an instruction sending module 904, configured to send the at least one first interactive processing instruction to the second control layer based on the full-duplex communication channel, where the second control layer invokes the first control layer of the client to process the at least one first interactive processing instruction so as to execute a corresponding service operation.
In some embodiments, the information obtaining module may be specifically configured to receive multimodal interaction input information sent by the second control layer based on the full-duplex communication channel; the multi-modal interaction input information is acquired by the client through a first acquisition component.
In some embodiments, the information acquisition module may be specifically configured to acquire multimodal interaction input information via a second acquisition component.
In some embodiments, the multimodal interaction input information may include voice interaction information;
the instruction determining module may be specifically configured to identify at least one first interaction processing instruction in the voice interaction information.
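A server-side sketch of this flow, assuming the Node.js "ws" package for the full-duplex channel and a placeholder recognizer standing in for real speech recognition and intent parsing; none of this is the patent's concrete implementation:

```typescript
// Illustrative server-side sketch. Assumes the Node.js "ws" package; the
// message format and recognizeInstructions() are placeholders.
import { WebSocketServer, WebSocket } from "ws";

interface MultiModalInput { modality: string; data: string; }
interface FirstInteractionInstruction { type: string; payload?: unknown; }

// Placeholder: derive instructions from voice interaction information.
function recognizeInstructions(input: MultiModalInput): FirstInteractionInstruction[] {
  if (input.modality !== "audio") return [];
  // e.g. the utterance "one coffee" maps to a single object selection instruction
  return [{ type: "select_object", payload: { objectId: "coffee" } }];
}

const wss = new WebSocketServer({ port: 8080 });

// Each connection is the full-duplex channel to one client's second control layer.
wss.on("connection", (channel: WebSocket) => {
  channel.on("message", (raw) => {
    const input: MultiModalInput = JSON.parse(raw.toString());
    // Determine at least one first interaction processing instruction and
    // feed it back over the same channel.
    const instructions = recognizeInstructions(input);
    if (instructions.length > 0) {
      channel.send(JSON.stringify(instructions));
    }
  });
});
```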
The interaction processing apparatus shown in fig. 9 may execute the interaction processing method of the embodiment shown in fig. 3; the implementation principle and technical effect are similar and are not described again. The specific manner in which each module and unit of the interaction processing apparatus performs its operations has been described in detail in the related method embodiments and is not repeated here.
In one possible design, the interaction processing apparatus of the embodiment shown in fig. 9 may be implemented as a server, which may include a storage component 1001 and a processing component 1002 as shown in fig. 10;
the storage component 1001 stores one or more computer instructions for the processing component 1002 to invoke for execution.
The processing component 1002 is configured to:
establishing a full-duplex communication channel with a second control layer of the client;
acquiring multi-modal interaction input information;
determining at least one first interaction processing instruction based on the multi-modal interaction input information;
and sending the at least one first interactive processing instruction to the second control layer based on the full-duplex communication channel, and calling the first control layer of the client to process the at least one first interactive processing instruction by the second control layer so as to execute corresponding service operation.
Among other things, the processing component 1002 may include one or more processors to execute computer instructions to perform all or some of the steps of the methods described above. Of course, the processing elements may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components configured to perform the above-described methods.
The storage component 1001 is configured to store various types of data to support operations in the server. The memory components may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Of course, the server may also include other components, such as an input/output interface, a communication component, a second acquisition component, and the like.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the server and other devices, and the like.
The second acquisition component can be used for acquiring multi-modal interactive input information, and the processing component can specifically acquire the multi-modal interactive input information through the second acquisition component.
The second acquisition component may include, for example, one or more of an image acquisition component, an audio acquisition component, and a biometric acquisition component, among others.
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program; when the computer program is executed by a computer, the interaction processing method in the embodiment shown in fig. 3 may be implemented.
In addition, an embodiment of the present application further provides a physical machine that integrates the terminal of the embodiment shown in fig. 8 and the server of the embodiment shown in fig. 10, that is, the terminal and the server are disposed in the same physical machine. In a practical application, the physical machine may be a self-service terminal in an offline service place that conducts business object transactions based on user interaction input information, such as a meal ordering machine, in which case the at least one first interaction processing instruction and the second interaction processing instruction may also be processing instructions related to a meal ordering operation.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (28)

1. An interactive processing method, comprising:
establishing, by a second control layer of a client, a full-duplex communication channel with a server, wherein the client comprises a first control layer and the second control layer for performing service logic processing;
receiving at least one first interaction processing instruction fed back by the server based on multi-modal interaction input information based on the full-duplex communication channel;
and calling the first control layer to process the at least one first interactive processing instruction so as to respectively execute corresponding service operation.
2. The method of claim 1, further comprising:
the client acquires the multi-modal interaction input information through a first acquisition component;
and sending the multi-modal interaction input information to the server based on the full-duplex communication channel so that the server can determine at least one first interaction processing instruction corresponding to the multi-modal interaction input information.
3. The method of claim 1, wherein the multi-modal interaction input information is collected by the server through a second collection component.
4. The method of claim 1, wherein the first interaction processing instruction comprises reply content;
the invoking the first control layer to process the at least one first interactive processing instruction to respectively execute the corresponding service operations includes:
and calling the first control layer to output at least one reply content.
5. The method of claim 4, wherein the invoking the first control layer to output at least one reply content comprises:
if any reply content comprises displayable content, calling a first control layer to update a view layer based on the displayable content so as to display the displayable content;
and if any reply content comprises the voice content, calling the first control layer to control the audio component to play the voice content.
6. The method of claim 1, wherein after the invoking the first control layer to process the at least one first interaction processing instruction to perform the respective business operation, the method further comprises:
receiving a processing result corresponding to the at least one first interactive processing instruction fed back by the server based on the full-duplex communication channel;
and calling the first control layer to output the processing result.
7. The method of claim 6, further comprising:
the client receives prompt content corresponding to the processing result fed back by the server based on the full-duplex communication channel;
and calling the first control layer to output the prompt content.
8. The method of claim 7, wherein said invoking the first control layer to output the hint comprises:
if the prompt content comprises displayable content, calling the first control layer to update the view layer so as to display the displayable content;
and if the prompt content comprises voice content, calling the first control layer to control an audio component to play the voice content.
9. The method of claim 1, wherein the multi-modal interaction input information comprises biometric input information;
the receiving, based on the full-duplex communication channel, at least one first interaction processing instruction determined based on multi-modal interaction input information fed back by the server includes:
based on the full-duplex communication channel, receiving a page switching instruction fed back by the server in response to the biometric input information;
the invoking the first control layer to process the at least one first interactive processing instruction to respectively execute the corresponding service operations includes:
and calling the first control layer to process the page switching instruction so as to switch from the screen saver page to a preset page.
10. The method of claim 1, wherein the multimodal interaction input information comprises voice interaction information;
the receiving, based on the full-duplex communication channel, at least one first interaction processing instruction determined based on multi-modal interaction input information fed back by the server includes:
and receiving at least one first interaction processing instruction which is fed back by the server and is obtained by identification from the voice interaction information based on the full-duplex communication channel.
11. The method of claim 10, wherein the at least one first interaction processing instruction comprises at least one object selection instruction; wherein each object selection instruction comprises an object identification;
the invoking the first control layer to process the at least one first interactive processing instruction to respectively execute the corresponding service operations includes:
and calling a first control layer to process the at least one object selection instruction so as to generate a business order based on the at least one object selection instruction.
12. The method of claim 10, wherein the at least one first interactive processing instruction comprises at least one order update instruction; each order updating instruction comprises an object identifier and an operation type;
the invoking the first control layer to process the at least one first interactive processing instruction to respectively execute the corresponding service operations includes:
and calling a first control layer to process the at least one order updating instruction so as to update the service order.
13. The method of claim 10, wherein the at least one first interactive processing instruction comprises an order settlement instruction for a business order;
the invoking the first control layer to process the at least one first interactive processing instruction to respectively execute the corresponding service operations includes:
and calling a first control layer to process the order settlement instruction so as to execute settlement operation.
14. The method of claim 13, wherein said invoking the first control layer to process the order settlement instructions to perform settlement operations comprises:
and calling a first control layer to process the order settlement instruction, acquiring settlement prompt information comprising a payment link, and updating a view layer to display the settlement prompt information.
15. The method of claim 10, wherein the multi-modal interaction input information further comprises biometric input information;
the receiving, based on the full-duplex communication channel, at least one first interaction processing instruction identified and obtained from the voice interaction information fed back by the server includes:
and receiving, based on the full-duplex communication channel, at least one first interaction processing instruction that is fed back by the server and identified from the voice interaction information in response to a detection result indicating the presence of the biometric input information.
16. An interactive processing method, comprising:
a full-duplex communication channel is established between the server side and a second control layer of the client side;
acquiring multi-modal interaction input information;
determining at least one first interaction processing instruction corresponding to the multi-modal interaction input information;
and sending the at least one first interactive processing instruction to the second control layer based on the full-duplex communication channel, and calling the first control layer of the client to process the at least one first interactive processing instruction by the second control layer so as to execute corresponding service operation.
17. The method of claim 16, wherein obtaining multimodal interaction input information comprises:
receiving multi-modal interaction input information sent by the second control layer based on the full-duplex communication channel; the multi-modal interaction input information is acquired by the client through a first acquisition component.
18. The method of claim 16, wherein obtaining multimodal interaction input information comprises:
and acquiring multi-modal interaction input information through a second acquisition component.
19. The method of claim 16, wherein the multimodal interaction input information comprises voice interaction information;
the determining at least one first interaction processing instruction based on the multi-modal interaction input information comprises:
recognizing at least one first interaction processing instruction from the voice interaction information.
20. An interactive processing method, comprising:
the client acquires at least one first interactive processing instruction transmitted by a second control layer; wherein the at least one first interaction processing instruction is determined by a server based on multi-modal interaction input information and sent to the second control layer based on a full-duplex communication channel established with the second control layer;
processing the at least one first interactive processing instruction to execute a corresponding business operation;
detecting a second interactive processing instruction triggered by screen interactive input information;
and processing the second interactive processing instruction to execute corresponding business operation.
21. An interaction processing apparatus, comprising:
the first communication establishing module is used for establishing a full-duplex communication channel with the server;
the instruction receiving module is used for receiving at least one first interaction processing instruction which is fed back by the server and is determined based on multi-modal interaction input information based on the full-duplex communication channel;
and the calling execution module is used for calling the first control layer to process the at least one first interactive processing instruction so as to respectively execute corresponding business operations.
22. An interaction processing apparatus, comprising:
the second communication establishing module is used for establishing a full-duplex communication channel with a second control layer of the client;
the information acquisition module is used for acquiring multi-modal interaction input information;
the instruction determining module is used for determining at least one first interaction processing instruction based on the multi-modal interaction input information;
and the instruction sending module is used for sending the at least one first interactive processing instruction to the second control layer based on the full-duplex communication channel, and the second control layer calls the first control layer of the client to process the at least one first interactive processing instruction so as to execute corresponding service operation.
23. An interaction processing apparatus, comprising:
the first instruction detection module is used for acquiring at least one first interactive processing instruction transmitted by the second control layer; wherein the at least one first interaction processing instruction is determined by a server based on multi-modal interaction input information and sent to the second control layer based on a full-duplex communication channel established with the second control layer;
the first instruction execution module is used for processing the at least one first interactive processing instruction to execute corresponding business operation;
the second instruction detection module is used for detecting a second interactive processing instruction triggered by the screen interactive input information;
and the second instruction execution module is used for processing the second interactive processing instruction to execute corresponding service operation.
24. A client is characterized by comprising a first control layer and a second control layer which are used for carrying out service logic processing;
the first control layer is used for detecting a second interactive processing instruction triggered by screen interactive input information and processing the second interactive processing instruction to execute corresponding business operation; acquiring at least one first interactive processing instruction transmitted by the second control layer, and processing the at least one first interactive processing instruction to execute corresponding business operation;
the second control layer is used for establishing a full duplex communication channel with a server; receiving at least one first interaction processing instruction which is fed back by the server and determined based on multi-modal interaction input information based on the full-duplex communication channel; and calling the first control layer to process the at least one first interactive processing instruction so as to respectively execute corresponding service operation.
25. A terminal, comprising a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
establishing a full-duplex communication channel with a server;
receiving at least one first interaction processing instruction which is fed back by the server and determined based on multi-modal interaction input information based on the full-duplex communication channel;
and processing the at least one first interactive processing instruction to respectively execute corresponding business operations.
26. The terminal of claim 25, wherein the at least one first interactive processing instruction comprises processing instructions related to meal ordering operations; the terminal comprises a meal ordering machine.
27. A server comprising a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
establishing a full-duplex communication channel with a second control layer of the client;
acquiring multi-modal interaction input information;
determining at least one first interaction processing instruction corresponding to the multi-modal interaction input information;
and sending the at least one first interactive processing instruction to the second control layer based on the full-duplex communication channel, and calling the first control layer of the client to process the at least one first interactive processing instruction by the second control layer so as to execute corresponding service operation.
28. A physical machine incorporating a terminal as claimed in claim 25 or 26 and a server as claimed in claim 27.
CN201811397854.2A 2018-11-22 2018-11-22 Interactive processing method, device, terminal and server Active CN111208899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811397854.2A CN111208899B (en) 2018-11-22 2018-11-22 Interactive processing method, device, terminal and server


Publications (2)

Publication Number Publication Date
CN111208899A true CN111208899A (en) 2020-05-29
CN111208899B CN111208899B (en) 2023-05-26

Family

ID=70782123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811397854.2A Active CN111208899B (en) 2018-11-22 2018-11-22 Interactive processing method, device, terminal and server

Country Status (1)

Country Link
CN (1) CN111208899B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110286586A1 (en) * 2010-04-21 2011-11-24 Angel.Com Multimodal interactive voice response system
CN105469215A (en) * 2015-12-11 2016-04-06 北京中科安瑞科技有限责任公司 Interactive terminal used for supervised person in supervision room
CN105721239A (en) * 2016-01-18 2016-06-29 网易(杭州)网络有限公司 Game test method, device and game system
CN106657370A (en) * 2017-01-03 2017-05-10 腾讯科技(深圳)有限公司 Data transmission method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FEI WU; QIANG XU; SHIHAI SHAO; CHENXING LI; DONGLIN LIU; YOUXI TANG: "Performance of auxiliary antenna-based self-interference cancellation in full-duplex radios"
叶楠: "Design and Implementation of a Gateway-Based Multi-Screen Interaction System"

Also Published As

Publication number Publication date
CN111208899B (en) 2023-05-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant