CN115230724A - Interaction method, electronic device and computer storage medium

Info

Publication number
CN115230724A
Authority
CN
China
Prior art keywords
multimedia effect
multimedia
interaction
effect
result
Prior art date
Legal status
Pending
Application number
CN202110427541.2A
Other languages
Chinese (zh)
Inventor
陈国雄
田发景
刘玉
Current Assignee
Pateo Connect and Technology Shanghai Corp
Original Assignee
Pateo Connect and Technology Shanghai Corp
Priority date
Filing date
Publication date
Application filed by Pateo Connect and Technology Shanghai Corp filed Critical Pateo Connect and Technology Shanghai Corp
Priority to CN202110427541.2A
Publication of CN115230724A


Classifications

    • B60W50/08 — Interaction between the driver and the control system
    • B60W40/08 — Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B60W2040/089 — Driver voice
    • B60W2050/143 — Means for informing, warning or prompting the driver: alarm means
    • B60W2050/146 — Means for informing, warning or prompting the driver: display means
    • B60W2540/21 — Input parameters relating to occupants: voice

Abstract

The application relates to an interaction method, an electronic device and a computer storage medium. The interaction method comprises the following steps: in response to a received instruction, determining an interaction result of the instruction; determining a multimedia effect corresponding to the interaction result; and outputting the multimedia effect. In this way, a multimedia effect corresponding to the interaction result of an instruction can be output, so that the presentation of the interaction result is more vivid and information is conveyed more accurately and quickly.

Description

Interaction method, electronic device and computer storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to an interaction method, an electronic device, and a computer storage medium.
Background
With the rapid development of the automobile industry, automobiles have become an indispensable means of travel, and the time people spend traveling in them keeps increasing. More and more vehicles therefore introduce an instruction interaction system, so that people can interact with the vehicle naturally, conveniently and quickly. However, most current instruction interaction systems respond to user requests in a single manner, namely a TTS (Text To Speech) broadcast plus the display of fixed text content. This cannot deliver the core information to the user vividly and quickly, easily bores the user, and fails to arouse the user's interest either in using the instruction interaction system or in the results it presents.
Disclosure of Invention
An object of the present application is to provide an interaction method that solves the above technical problem: by outputting a corresponding multimedia effect together with the interaction result, the presentation of the interaction result becomes more accurate and more engaging.
In order to achieve the above object, the present application provides an interaction method comprising the steps of:
in response to a received instruction, determining an interaction result of the instruction;
determining a multimedia effect corresponding to the interaction result, wherein the multimedia effect comprises at least one of sound, animation and video;
and outputting the multimedia effect.
The present application further provides an electronic device, including:
at least one processing unit;
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, which when executed by the at least one processing unit, cause the apparatus to perform steps according to the interaction method as described above.
The present application further provides a computer storage medium having computer program instructions stored thereon; which when executed by a processor implement the interaction method as described above.
The application discloses an interaction method, an electronic device and a computer storage medium. The interaction method comprises the following steps: in response to a received instruction, determining an interaction result of the instruction; determining a multimedia effect corresponding to the interaction result, wherein the multimedia effect comprises at least one of sound, animation and video; and outputting the multimedia effect. In this way, a multimedia effect corresponding to the interaction result of an instruction can be output, so that the presentation of the interaction result is more vivid and information is conveyed more accurately and quickly.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer and implementable in accordance with this description, and to make the above and other objects, features and advantages of the present application easier to understand, preferred embodiments are described below in detail with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic flowchart of an interaction method according to an embodiment of the present invention;
FIG. 2 is a timing diagram of an interaction method provided by an embodiment of the invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following embodiments are provided to illustrate the present disclosure, and other advantages and effects will be apparent to those skilled in the art from the disclosure.
In the following description, reference is made to the accompanying drawings that describe several embodiments of the application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Although the terms first, second, etc. may be used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
Fig. 1 is a schematic flowchart of an interaction method according to an embodiment of the present invention, and as shown in fig. 1, the interaction method according to the embodiment of the present invention includes:
Step 110, in response to the received instruction, determining an interaction result of the instruction.
The user can initiate a voice instruction through the voice client to interact. After receiving the voice information, the server performs voice recognition and semantic processing on it to determine the voice interaction result of the voice information. For example, the user inputs "What's the weather like today?"; through voice recognition and semantic processing, the semantics of the voice information can be determined to be a query for today's weather at the current position, and the queried weather at the current position today is taken as the voice interaction result. The user can also input an instruction through text, gestures or by pressing corresponding keys, and the system recognizes the received text or gesture to determine the interaction result of the instruction.
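As a minimal sketch of this step, resolving a received instruction to an interaction result could look as follows; all function names, data shapes and the toy intent mapping are assumptions for illustration, since the patent does not prescribe an implementation:

```python
# Hypothetical sketch of step 110: reduce any input modality to text,
# then map the text to an intent and an interaction result.

def fake_asr(audio: bytes) -> str:
    # Placeholder for a real speech recognizer.
    return "what's the weather like today"

def recognize(instruction: dict) -> str:
    """Stand-in for ASR / text / gesture recognition."""
    if instruction["modality"] == "voice":
        return fake_asr(instruction["audio"])
    return instruction["payload"]          # text, gesture label or key code

def determine_interaction_result(instruction: dict) -> dict:
    text = recognize(instruction)
    # Toy semantic processing: keyword matching instead of a real NLU model.
    if "weather" in text:
        return {"intent": "query_weather", "result": "thunderstorm"}
    if "birthday" in text:
        return {"intent": "play_music", "result": "happy_birthday_song"}
    return {"intent": "unknown", "result": None}

print(determine_interaction_result(
    {"modality": "text", "payload": "what's the weather like today"}))
# -> {'intent': 'query_weather', 'result': 'thunderstorm'}
```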
And step 120, determining the multimedia effect corresponding to the interaction result.
After the interaction result is determined, the corresponding multimedia effect can be determined according to the interaction result; that is, the multimedia effect is associated with the interaction result, so that the interaction result can be presented more accurately, vividly and engagingly.
In an embodiment, determining a multimedia effect corresponding to an interaction result specifically includes:
determining a target scene according to the interaction result, wherein the target scene corresponds to at least one multimedia effect;
and selecting at least one multimedia effect from the at least one multimedia effect corresponding to the target scene as a target multimedia effect corresponding to the interaction result.
The multimedia effects comprise sound effects and/or animations, and one or more multimedia effects can be configured for the same scene. For example, a thunderstorm scene can be configured with several multimedia effects that differ in lightning frequency, thunder volume or dark-cloud color. According to the interaction result, the scene corresponding to it among all scenes, i.e. the target scene, is determined. After the target scene is determined, the multimedia effects corresponding to the target scene are obtained, and one or more of them are selected as the target multimedia effect according to a preset rule.
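A scene registry of this kind could be sketched as follows; the scene names, effect fields and the result-to-scene mapping are illustrative assumptions:

```python
# Hypothetical scene-to-effects registry: one scene may carry several
# candidate multimedia effects (sound and/or animation).

SCENE_EFFECTS = {
    "thunderstorm": [
        {"id": "storm_heavy", "sound": "rain+loud_thunder", "animation": "frequent_lightning"},
        {"id": "storm_light", "sound": "rain+soft_thunder", "animation": "sparse_lightning"},
    ],
    "birthday": [
        {"id": "bday_romantic", "sound": "soft_jingle", "animation": "candles"},
        {"id": "bday_winter",   "sound": "bells",       "animation": "snow_confetti"},
    ],
}

def target_scene(interaction_result: dict) -> str:
    """Map an interaction result to its scene (the target scene)."""
    mapping = {"thunderstorm": "thunderstorm",
               "happy_birthday_song": "birthday"}
    return mapping.get(interaction_result["result"], "default")
```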
In an embodiment, selecting at least one multimedia effect from at least one multimedia effect corresponding to a target scene as a target multimedia effect corresponding to an interaction result specifically includes:
acquiring user data of a current user, wherein the user data comprises at least one of user identity information, the current position of the user and time for inputting an instruction by the user;
screening a multimedia effect corresponding to a target scene according to user data;
and selecting at least one multimedia effect as a target multimedia effect according to the screening result.
The user data comprises at least one of the user identity information, the current position of the user and the time at which the user input the instruction. The input time can be subdivided into different granularities such as season, month, weekend, morning, afternoon and evening, and the user identity information can comprise information such as gender and preferences. The multimedia effects corresponding to the target scene are screened according to the user data and preset screening conditions, and at least one multimedia effect is selected as the target multimedia effect according to the screening result. That is, the screening result may contain one or more multimedia effects, and the target multimedia effect may be one or more multimedia effects selected from it. By screening the multimedia effects corresponding to the target scene, a multimedia effect matching the interaction result can be obtained more precisely, which improves the matching degree between the target multimedia effect and the interaction result and thus the accuracy.
In an embodiment, the filtering the multimedia effect corresponding to the target scene according to the user data specifically includes:
determining at least one multimedia effect tag according to the user data;
and acquiring a multimedia effect corresponding to at least one multimedia effect label in the target scene as a screening result.
When adding scene multimedia effects in the background management system, an administrator or user can add different multimedia effects to the same scene along several different dimensions, such as the time at which the effect applies (at granularities such as season, month, weekend, morning, afternoon or evening), the user identity information (such as gender) and the current position of the user. When adding a multimedia effect, multimedia effect tags such as the applicable time, gender and location are attached to it; the content of these tags is set so as to match user data such as the user identity information, the current position of the user and the time of the user's input instruction. Matching the user data against the information carried by the multimedia effect tags improves both the efficiency and the accuracy of the screening. Accordingly, the multimedia effects corresponding to at least one multimedia effect tag in the target scene are obtained as the screening result; that is, the screening result may be the effects corresponding to one or more multimedia effect tags.
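A sketch of this tag-based screening is given below; the tag vocabulary and data shapes are assumptions, and untagged effects are treated here as applicable to everyone:

```python
# Hypothetical tag-based screening: user data is converted into tags and
# matched against the tags attached to each effect of the target scene.

def tags_from_user_data(user: dict) -> set:
    tags = set()
    if "gender" in user:
        tags.add(f"gender:{user['gender']}")
    if "season" in user:
        tags.add(f"season:{user['season']}")
    if "city" in user:
        tags.add(f"city:{user['city']}")
    return tags

def screen_effects(effects: list, user: dict) -> list:
    """Keep the effects whose tags are all satisfied by the user's tags;
    an effect without tags passes unconditionally."""
    user_tags = tags_from_user_data(user)
    return [e for e in effects if set(e.get("tags", ())) <= user_tags]

# Example: a female user in winter matches the romantic and winter effects.
effects = [{"id": "bday_romantic", "tags": ["gender:female"]},
           {"id": "bday_winter",   "tags": ["season:winter"]},
           {"id": "bday_summer",   "tags": ["season:summer"]}]
print(screen_effects(effects, {"gender": "female", "season": "winter"}))
```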
In one embodiment, selecting at least one multimedia effect as the target multimedia effect according to the filtering result includes:
if the number of the screened multimedia effects is 0, taking a preset default multimedia effect as a target multimedia effect;
if the number of the screened multimedia effects is 1, taking the single screened multimedia effect as the target multimedia effect;
and if the number of the screened multimedia effects is more than 1, randomly selecting at least one multimedia effect as a target multimedia effect.
In actual implementation, a default multimedia effect for a specific scene can also be set in the background management system; in that case, once the target scene is determined, the target multimedia effect is the preset default multimedia effect. The background management system can configure new multimedia effects in real time and modify existing ones with immediate effect, and it can dynamically adjust the selection rules among the multimedia effects of the same scene.
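A minimal sketch of this 0/1/many selection rule follows; the 0, 1 and random-choice branches are per the text, while the function shape is an assumption:

```python
# Selection rule from the text: no match -> preset default effect;
# one match -> that effect; several matches -> random choice.

import random

def select_target_effect(screened: list, default_effect: dict) -> dict:
    if not screened:
        return default_effect          # fall back to the scene's default
    if len(screened) == 1:
        return screened[0]             # the single match is the target
    return random.choice(screened)     # several matches: pick one at random
```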
For example, the user can have a special song played through a voice instruction; the head unit of the vehicle the user is driving includes a voice interaction system and is already connected to the network via Wi-Fi or 4G. The user requests songs related to birthdays, Christmas, New Year and the like by voice; for example, the input voice information is "play Happy Birthday". While the voice interaction system plays the music, the scene related to the birthday can be determined as the target scene, and the target scene can contain more than one multimedia effect. For example, if the identity information indicates a female user, a romantic, lovely birthday effect popular with most female users is selected; if it is currently winter, a birthday effect related to winter can be selected. In actual implementation, several multimedia effects can also be superimposed: the birthday effects corresponding to "female" and "winter" can be superimposed at the same time, or the effects corresponding to "female" and/or "winter" can be superimposed on the birthday-related effect, so that the user experiences a birthday atmosphere suited to the user data. Multimedia effects matching the atmosphere of other special days such as Christmas and New Year can be realized in the same way.
For example, the weather is queried through a voice instruction; the head unit of the vehicle the user is driving includes a voice interaction system and is already connected to the network via Wi-Fi or 4G. The user asks about the weather by voice, e.g. "What's the weather like today?" The voice interaction system performs voice recognition and semantic processing; the queried weather result is a thunderstorm, so a multimedia effect is obtained for the thunderstorm scene, e.g. a sound effect of rain plus thunder and an animation of rain plus lightning. The sound effect and animation can differ according to the current time (such as summer or winter), gender (male or female) and location (city or region). Even without listening to the voice broadcast or looking carefully at the screen, the user can know accurately and quickly from the sound effect alone that the weather result is a thunderstorm, and the strength of the rain can even be conveyed by the rain sound. The presentation of the result is more vivid, and the information is conveyed more accurately and quickly.
For example, the current date is queried through a voice instruction; the head unit of the vehicle the user is driving includes a voice interaction system and is already connected to the network via Wi-Fi or 4G. The user asks for today's date by voice, e.g. "What's the date today?" The voice interaction system performs a TTS broadcast, and if the current day is a special festival, the multimedia effect corresponding to that festival is displayed. For example, on Father's Day a short video representing fatherly love can be shown that day. Thanks to the multimedia effect on such a special festival the user may remember the occasion and, though busy, find time to call a distant father, which improves the user experience.
In practical implementation, the instruction in the above embodiments may also be input by text, by gesture, or by pressing a corresponding key.
And step 130, outputting the multimedia effect.
After the target multimedia effect is determined, it is output together with the voice interaction result. The output position of the multimedia effect includes: full screen, following the voice assistant image, the left side of the screen, the right side, the top, the bottom, the center, the upper left corner, the upper right corner, the lower left corner and the lower right corner. The output position can be determined in two ways: by the cloud service or by the client.
In one embodiment, outputting the multimedia effect specifically includes:
outputting the multimedia effect according to preset output parameters; and/or
determining output parameters of the multimedia effect according to the content currently displayed on the terminal, and outputting the multimedia effect according to those output parameters.
The output position of the multimedia effect can be determined by the cloud service. The original parameters of a multimedia effect, preset by an administrator or user when the scene multimedia effect is added to the background management system, include at least one of playing position, size, number of times and volume; the original parameters are the parameters with which the multimedia effect file is stored. The playing position among the output parameters can be preset according to the content, importance, original size and original duration of the multimedia effect, and the set playing position is used as a preset output parameter; likewise, at least one of the size, duration, volume and number of times among the original parameters can be preset as preset output parameters according to the content and importance. After the target multimedia effect is determined, it is then played on the output device of the client according to the preset output parameters.
Alternatively, the administrator or user does not set the playing position when adding the scene multimedia effect; after obtaining the interaction result and the target multimedia effect, the client determines the output parameters of the multimedia effect according to the content currently displayed on the terminal and outputs the multimedia effect according to those parameters.
In actual implementation, determining an output parameter of a multimedia effect according to the current display content of the terminal specifically includes:
acquiring the current display content of the terminal;
analyzing the association degree of the interaction result and the display content and/or the importance level of the display content;
and determining output parameters of the multimedia effect according to the association degree and/or the importance level, wherein the output parameters comprise at least one of playing position, size, number of times and volume.
The association degree between the content currently displayed on the terminal and the interaction result, and/or the importance level of the displayed content, are analyzed; according to the association degree and/or the importance level it is decided whether to adjust the original parameters of the multimedia effect before using them as its output parameters, so that the user can enjoy the multimedia effect without it interfering with the use of the currently displayed content.
If the currently received interaction result is related to the currently displayed content, playing the multimedia effect does not disturb that content and can even make it more vivid and engaging. For example, if the current display content is a navigation interface and the received interaction result is navigation-related, the association degree between the interaction result and the displayed content is determined to be high; since playing the multimedia effect does not interfere with the displayed content, the effect can be played in the center of the screen at its original size. Otherwise, one or more parameters such as playing position, size, number of times and volume are adjusted.
The importance of the current interface content can be graded according to whether driving safety is affected. For example, navigation is important information while driving, so navigation can be preset as display content of high importance. If the association degree between the current interaction result and the displayed content is low, then when determining the output parameters of the multimedia effect the playing position among the original parameters is adjusted, e.g. moved to the lower right corner of the screen, or the maximum size or number of times of the effect is limited, and the adjusted parameters are finally taken as the output parameters. Adjusting the original parameters according to the importance of the current interface content prevents an effect output with its original parameters from obscuring the current navigation display, while still letting the user enjoy the effect.
In actual implementation, when both the importance level and the association degree are high, the association degree may take priority and the multimedia effect may be played at its original size. Alternatively, the importance level may take priority: once the importance level is determined to be high, the output parameters are derived by constraining the original parameters of the multimedia effect, whether or not the result is associated. For example, with navigation preset as display content of high importance, the original parameters are adjusted regardless of whether the interaction result is associated with navigation, and the adjusted parameters are used as output parameters; the constraints may limit the original maximum size, original position and so on. If the importance of the current interface content is low, for example a music playing interface, the multimedia effect is played in the center of the screen at its original size.
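One possible shape of this adjustment logic is sketched below; the threshold, field names and the specific policy (association degree taking priority when both are high) are assumptions, since the text allows either priority order:

```python
# Hypothetical output-parameter adjustment: high-importance display content
# constrains position/size/times unless the interaction result is strongly
# associated with it.

def output_parameters(effect: dict, display: dict, association: float) -> dict:
    params = dict(effect["original_params"])        # position, size, times, volume
    if display["importance"] == "high" and association < 0.5:
        # e.g. navigation on screen, unrelated result: move aside, shrink, play once
        params["position"] = "lower_right"
        params["size"] = min(params["size"], 0.25)  # at most a quarter of the screen
        params["times"] = 1
    # Low importance, or a strongly associated result: keep the original
    # center/full-size presentation.
    return params

effect = {"original_params": {"position": "center", "size": 1.0, "times": 3, "volume": 0.8}}
print(output_parameters(effect, {"importance": "high"}, association=0.2))
# -> {'position': 'lower_right', 'size': 0.25, 'times': 1, 'volume': 0.8}
```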
In an embodiment, the interaction method of the present invention further includes:
and when the multimedia effect is output, the interactive result is output by voice and/or characters.
For example, the user queries the weather by voice: "What's the weather like today?" The voice interaction system performs voice recognition and semantic processing; the queried weather result is a thunderstorm, so the multimedia effect for the thunderstorm scene is obtained, e.g. a sound effect of rain plus thunder and an animation of rain plus lightning. The obtained multimedia effect is then output while the queried weather condition is broadcast by voice, or corresponding text is displayed alongside the voice broadcast, or the queried weather condition is output as text only. By superimposing several output modes in this way, the interaction process becomes richer and the user experience improves.
Fig. 2 is a timing diagram of an interaction method according to an embodiment of the invention; please refer to fig. 2. The administrator configures multimedia effects (sound effects, animations and animation playing positions) for various scenes in the background management system; several multimedia effects can be configured for the same scene, and the multimedia effect modifications for a specified scene are saved. Taking a voice instruction as an example: the user initiates voice interaction with the voice client, which uploads the voice information to the voice cloud service. Through ASR (Automatic Speech Recognition) and semantic processing, the voice cloud service requests the multimedia effect corresponding to the scene of the processing result; the multimedia effect management service queries all multimedia effects for that scene, selects one according to the rules and returns it to the voice cloud service; the voice cloud service then returns the semantic processing result together with its multimedia effect to the voice client; and the voice client broadcasts the TTS, displays the semantic content and plays the audio and animation of the multimedia effect.
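Tying the earlier sketches together, the fig. 2 message flow could be condensed into a single function; in the real system these calls are split across the voice client, the voice cloud service and the multimedia effect management service, and all names remain hypothetical:

```python
# End-to-end sketch of the fig. 2 flow, reusing the functions and registry
# from the sketches above (fake_asr, determine_interaction_result,
# target_scene, SCENE_EFFECTS, screen_effects, select_target_effect).

def handle_voice_interaction(audio: bytes, user: dict):
    text = fake_asr(audio)                                    # cloud: ASR
    result = determine_interaction_result(
        {"modality": "text", "payload": text})                # cloud: semantics
    scene = target_scene(result)
    candidates = SCENE_EFFECTS.get(scene, [])                 # effect service
    screened = screen_effects(candidates, user)
    effect = select_target_effect(
        screened, default_effect={"id": "plain", "sound": None, "animation": None})
    return result, effect   # client then broadcasts TTS and plays the effect
```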
The application discloses an interaction method, an electronic device and a computer storage medium. The interaction method comprises the following steps: in response to a received instruction, determining an interaction result of the instruction; determining a multimedia effect corresponding to the interaction result, wherein the multimedia effect comprises at least one of sound, animation and video; and outputting the multimedia effect. In this way, a multimedia effect corresponding to the interaction result of an instruction can be output, so that the presentation of the interaction result is more vivid and information is conveyed more accurately and quickly.
Second embodiment
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device shown in fig. 3 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure. As shown in fig. 3, the present application further provides an electronic device 600 comprising a processor 601, which may execute the method of the embodiments of the present disclosure according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The processor 601 may include, for example, a general-purpose microprocessor (e.g. a CPU), an instruction processor and/or related chip sets, and/or a special-purpose microprocessor (e.g. an application-specific integrated circuit (ASIC)). The processor 601 may also include onboard memory for caching purposes, and may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are stored. The processor 601, the ROM 602 and the RAM 603 are connected to each other via a bus 604. The processor 601 performs various operations of the method flows according to embodiments of the present disclosure by executing programs in the ROM 602 and/or the RAM 603. Note that these programs may also be stored in one or more memories other than the ROM 602 and the RAM 603. The processor 601 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in those one or more memories.
In this embodiment, the processor 601, by executing a program stored in one or more memories, may determine an interaction result of an instruction in response to the received instruction; determining a multimedia effect corresponding to the interaction result; and outputting the multimedia effect. By the mode, the corresponding multimedia effect can be output based on the interaction result of the instruction, so that the display of the interaction result is more vivid, and the information transmission is more accurate and rapid.
According to an embodiment of the disclosure, the electronic device 600 may also include an input/output (I/O) interface 605, which is likewise connected to the bus 604. The electronic device 600 may also include one or more of the following components connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse and the like; an output portion 607 including a display such as a cathode-ray tube (CRT) or liquid-crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. Further, a drive and removable media such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory may also be connected to the I/O interface 605 as necessary, so that a computer program read from them can be installed into the storage section 608 as needed.
Method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable storage medium, the computer program containing program code for performing the method shown in fig. 1. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609 and/or installed from a removable medium. When executed by the processor 601, the computer program performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units and the like described above may be implemented by computer program modules according to embodiments of the present disclosure.
Embodiments of the present application also provide a computer-readable storage medium, which may be embodied in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
The specific process of executing the above method steps in this embodiment is detailed in the related description of fig. 1, and is not described herein again.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (10)

1. An interaction method, comprising the steps of:
in response to a received instruction, determining an interaction result of the instruction;
determining a multimedia effect corresponding to the interaction result;
and outputting the multimedia effect.
2. The interaction method according to claim 1, wherein the determining the multimedia effect corresponding to the interaction result comprises:
determining a target scene according to the interaction result, wherein the target scene corresponds to at least one multimedia effect;
and selecting at least one multimedia effect from at least one multimedia effect corresponding to the target scene as a target multimedia effect corresponding to the interaction result.
3. The interaction method according to claim 2, wherein said selecting at least one multimedia effect from the at least one multimedia effect corresponding to the target scene as the target multimedia effect corresponding to the interaction result comprises:
acquiring user data of a current user, wherein the user data comprises at least one of user identity information, the current position of the user and the time for the user to input the instruction;
screening a multimedia effect corresponding to the target scene according to the user data;
and selecting at least one multimedia effect as the target multimedia effect according to the screening result.
4. The interaction method according to claim 3, wherein the filtering the multimedia effect corresponding to the target scene according to the user data includes:
determining at least one multimedia effect tag according to the user data;
and acquiring a multimedia effect corresponding to the at least one multimedia effect label in the target scene as a screening result.
5. The interaction method according to claim 3, wherein the selecting at least one multimedia effect as the target multimedia effect according to the filtering result comprises:
if the number of the screened multimedia effects is 0, setting a preset default multimedia effect as the target multimedia effect;
if the number of the screened multimedia effects is 1, taking the single screened multimedia effect as the target multimedia effect;
and if the number of the screened multimedia effects is more than 1, randomly selecting at least one multimedia effect as the target multimedia effect.
6. The interaction method as claimed in claim 1, wherein said outputting the multimedia effect comprises:
outputting the multimedia effect according to preset output parameters; and/or
determining the output parameters of the multimedia effect according to the current display content of the terminal, and outputting the multimedia effect according to the output parameters.
7. The interaction method according to claim 6, wherein the determining the output parameters of the multimedia effect according to the current display content of the terminal comprises:
acquiring the current display content of the terminal;
analyzing the association degree of the interaction result and the display content and/or the importance level of the display content;
and determining output parameters of the multimedia effect according to the association degree and/or the importance level, wherein the output parameters comprise at least one of playing position, size, number of times and volume.
8. The interaction method of claim 1, wherein the method further comprises:
and when the multimedia effect is output, outputting the interaction result by voice and/or text.
9. An electronic device, comprising:
at least one processing unit;
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions when executed by the at least one processing unit causing the apparatus to perform the steps of the interaction method of any of claims 1 to 8.
10. A computer storage medium having computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement the interaction method of any one of claims 1 to 8.
CN202110427541.2A 2021-04-21 2021-04-21 Interaction method, electronic device and computer storage medium Pending CN115230724A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110427541.2A 2021-04-21 2021-04-21 Interaction method, electronic device and computer storage medium


Publications (1)

Publication Number Publication Date
CN115230724A (en) 2022-10-25

Family

ID=83666794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110427541.2A Pending CN115230724A (en) 2021-04-21 2021-04-21 Interaction method, electronic device and computer storage medium

Country Status (1)

Country Link
CN (1) CN115230724A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination