CN112380334B - Intelligent interaction method and device and intelligent equipment - Google Patents


Info

Publication number
CN112380334B
CN112380334B (application CN202011413811.6A)
Authority
CN
China
Prior art keywords
user
interaction
target
content
intelligent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011413811.6A
Other languages
Chinese (zh)
Other versions
CN112380334A (en)
Inventor
王琨
谢志栋
李文轩
葛莹
孙宇
丁琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN202011413811.6A priority Critical patent/CN112380334B/en
Publication of CN112380334A publication Critical patent/CN112380334A/en
Application granted granted Critical
Publication of CN112380334B publication Critical patent/CN112380334B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention disclose an intelligent interaction method, an intelligent interaction device, and a smart device. The method comprises the following steps: determining a user target in an application scene; determining guidance content associated with the user target and an interaction mode associated with the application scene; and providing the guidance content based on the interaction mode. Embodiments of the invention can provide the user with the expected interaction mode and guidance content, judge whether the current guidance scheme is suitable by collecting the user's reactions, adjust the control strategy in real time, and intelligently generate guidance content better suited to the current user.

Description

Intelligent interaction method and device and intelligent equipment
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to an intelligent interaction method, an intelligent interaction device and intelligent equipment.
Background
With the popularization of computer technology, daily life has gradually entered the intelligent era. People now apply intelligent technologies such as smart televisions, smart navigation, and smart homes in daily life, and these technologies provide convenient services in many aspects of life.
However, with the functions of current smart devices highly centralized and intelligent, the guidance content on a smart device often deviates from the user's expectation, and the interaction mode cannot be well adapted to the current task, so the guidance and reminder functions on smart devices remain underused.
Disclosure of Invention
The invention provides an intelligent interaction method, an intelligent interaction device, and a smart device, which can provide guidance content that meets the user's expectation.
The technical scheme of the embodiment of the invention is as follows:
an intelligent interaction method, comprising:
determining a user target in an application scene;
determining guidance content associated with the user target and an interaction mode associated with the application scenario;
providing the guidance content based on the interaction mode.
In one embodiment, the application scenario includes an interaction class scenario, wherein the determining a user goal in the application scenario includes: and acquiring user behavior data in an application scene, and determining the user target based on the user behavior data.
In one embodiment, the application scenario includes a random event class scenario, and the determining the interaction mode associated with the application scenario includes: monitoring characteristics of a specified object in the application scene; when the characteristics change, determining an interaction triggering opportunity; or
The application scene comprises a periodic event class scene, wherein the determining of the interaction mode associated with the application scene comprises: monitoring a periodic trigger value in the application scene; and when the periodic trigger value occurs, determining an interaction trigger opportunity.
In one embodiment, the determining the user goal based on user behavior data comprises:
user behavior data comprising a plurality of temporally related user actions is input to a trained neural network model to output, by the neural network model, a user objective corresponding to the user behavior data.
In one embodiment, the method further comprises a process of pre-training the neural network model, the process comprising:
providing historical user behavior data to a neural network model to train the neural network model, wherein the historical user behavior data comprises n temporally correlated historical user actions, the 1st through the (n-1)th user actions being input to the neural network model, the nth user action being output from the neural network model, and n being a positive integer of at least 2.
In one embodiment, when the user goal is to adjust the device volume, the interaction mode includes a voice prompt to adjust directly to a target volume value;
when the user goal is to adjust the device channel, the interaction mode includes a voice prompt to switch directly to a target channel value;
when the user goal is to view device content, the interaction mode includes a voice prompt or a gesture prompt to view the target content directly;
when the user goal is to start an application, the interaction mode includes providing a shortcut entrance to the application;
and when the user goal is to access a link, the interaction mode includes providing a shortcut entrance to the link.
In one embodiment, further comprising:
detecting a user action with respect to the guidance content;
and adjusting the interaction triggering opportunity of the interaction mode or the guide content based on the user action.
In one embodiment, the adjusting the interaction triggering opportunity of the interaction mode based on the user action includes:
advancing the interaction trigger opportunity when the user action is issued within a first predetermined time of receiving the guidance content;
maintaining the interaction trigger opportunity when the user action is issued within a second predetermined time of receiving the guidance content;
delaying the interaction trigger opportunity when the user action is issued within a third predetermined time of receiving the guidance content;
wherein the first predetermined time is less than the second predetermined time, and the second predetermined time is less than the third predetermined time.
In one embodiment, the adjusting the guidance content based on the user action comprises:
maintaining the guidance content when the user action matches the guidance content;
when the user action does not match the guidance content, adjusting the guidance content so that the guidance content matches the user action.
An intelligent interaction device, comprising:
the first determination module is used for determining a user target in an application scene;
a second determination module to determine guidance content associated with the user target and an interaction means associated with the application scenario;
and the providing module is used for providing the guide content based on the interaction mode.
In one embodiment, the application scenario includes an interaction class scenario, and the first determining module is configured to collect user behavior data in the application scenario, and determine the user target based on the user behavior data.
In one embodiment, the application scenario includes a random event class scenario, and the second determining module is configured to monitor a characteristic of a specified object in the application scenario; when the characteristics change, determining an interaction triggering opportunity; or
The application scene comprises a periodic event scene, and the second determining module is used for monitoring a periodic trigger value in the application scene; and when the periodic trigger value appears, determining an interaction trigger opportunity.
In one embodiment, the first determination module is configured to input user behavior data comprising a plurality of temporally related user actions to a trained neural network model to output, by the neural network model, a user goal corresponding to the user behavior data.
In one embodiment, the first determining module is further configured to provide historical user behavior data to a neural network model to train the neural network model, where the historical user behavior data includes n temporally correlated historical user actions, the 1st through the (n-1)th user actions are used as inputs to the neural network model, the nth user action is used as the output of the neural network model, and n is a positive integer of at least 2.
In one embodiment, when the user goal is to adjust the device volume, the interaction mode includes a voice prompt to adjust directly to a target volume value;
when the user goal is to adjust the device channel, the interaction mode includes a voice prompt to switch directly to a target channel value;
when the user goal is to view device content, the interaction mode includes a voice prompt or a gesture prompt to view the target content directly;
when the user goal is to start an application, the interaction mode includes providing a shortcut entrance to the application;
and when the user goal is to access a link, the interaction mode includes providing a shortcut entrance to the link.
In one embodiment, further comprising:
an adjustment module to detect a user action for the guidance content; and adjusting the interaction triggering opportunity or the guide content of the interaction mode based on the user action.
In one embodiment, the adjusting module is configured to advance the interaction trigger opportunity when the user action is issued within a first predetermined time of receiving the guidance content; maintain the interaction trigger opportunity when the user action is issued within a second predetermined time of receiving the guidance content; and delay the interaction trigger opportunity when the user action is issued within a third predetermined time of receiving the guidance content; wherein the first predetermined time is less than the second predetermined time, and the second predetermined time is less than the third predetermined time.
In one embodiment, the adjusting module is configured to maintain the guidance content when the user action matches the guidance content; when the user action does not match the guidance content, adjusting the guidance content so that the guidance content matches the user action.
A smart device comprising a processor and a memory;
the memory stores an application program executable by the processor for causing the processor to perform the intelligent interaction method as described in any one of the above.
A computer readable storage medium having computer readable instructions stored therein for performing the intelligent interaction method as recited in any of the above.
According to the above technical scheme, in embodiments of the invention, a user target in an application scene is determined; guidance content associated with the user target and an interaction mode associated with the application scene are determined; and the guidance content is provided based on the interaction mode. By analyzing the user target in the application scene, embodiments of the invention can therefore provide the user with the expected interaction mode and guidance content.
In addition, embodiments of the invention collect user reactions to judge whether the current guidance scheme is suitable, adjust the control strategy in real time, and intelligently generate guidance content better suited to the current user.
Drawings
Fig. 1 is a flowchart of an intelligent interaction method according to an embodiment of the present invention.
Fig. 2 is an architecture diagram of an intelligent interactive system according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of the working phase of intelligent interaction according to an embodiment of the invention.
FIG. 4 is a schematic diagram of guided interaction of an interaction class scene according to an embodiment of the present invention.
FIG. 5 is a flow chart of the processing of an event queue according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of event prediction based on a Recurrent Neural Network (RNN) model according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating a guiding interaction of a random event according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating a guidance interaction for a periodic event according to an embodiment of the present invention.
FIG. 9 is a diagram illustrating adjusting a guidance interaction based on feedback information, according to an embodiment of the invention.
Fig. 10 is a diagram illustrating adjustment of guidance content according to an embodiment of the present invention.
FIG. 11 is a diagram illustrating an overall process of guiding a user to perform intelligent interaction according to an embodiment of the present invention.
FIG. 12 is a diagram illustrating guided interaction for simple interaction according to an embodiment of the present invention.
FIG. 13 is a diagram illustrating a guiding interaction for strongly relevant user behavior according to an embodiment of the present invention.
Fig. 14 is a schematic diagram of guidance interaction for dynamically adjusting feedback timing to meet a user's expectation according to an embodiment of the present invention.
Fig. 15 is a block diagram of an intelligent interaction device according to the present invention.
Fig. 16 is a block diagram of a smart device having a memory-processor architecture according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the accompanying drawings.
For simplicity and clarity of description, the invention is described below through several representative embodiments. Numerous details are set forth to provide an understanding of the principles of the invention. It will be apparent, however, that the invention may be practiced without these specific details. Some embodiments are not described in detail and are given only as frameworks, to avoid unnecessarily obscuring aspects of the invention. Hereinafter, "comprising" means "including but not limited to", and "according to …" means "according to at least …, but not limited to only according to …". In view of the conventions of the Chinese language, when the following description does not specify the number of a component, it means that the component may be one or more, or may be understood as at least one.
In view of the clearly insufficient use of the intelligent interaction functions carried on current smart devices, embodiments of the invention overcome two defects of the prior art: guidance reminders cannot be provided according to user behavior and target content characteristics, and the reminding and guiding modes cannot be well adapted to the current task. Embodiments of the invention provide a technical scheme for guiding the user, based on scene factors, to use intelligent interaction modes for human-computer interaction.
Embodiments of the invention address the problems that, with the functions of current smart devices highly centralized and intelligent, some intelligent functions have a low utilization rate and the device cannot actively serve the user; they raise the value of intelligent interaction for human-computer interaction and thereby improve the interaction experience.
The technical scheme of guiding the user, based on scene factors, to use intelligent interaction modes for human-computer interaction particularly relates to identifying and analyzing user behavior, managing content attribute features, judging the intervention mode for the device's current usage-scene information, and providing intelligently guided human-computer interaction in the scene the user expects. Specifically, embodiments of the invention provide a prompting system and method that guide the user to use intelligent interaction based on expected user behavior and content attribute features. The scheme mainly comprises the following points:
(1) The trigger mechanism of the guidance system is differentiated according to the usage scene.
In a user's interactive usage scene, the system collects the user's current behavior data within a specified time for analysis, predicts the purpose of the current behavior from that data, and provides corresponding intelligent operation guidance and prompts; when designated features appear, a corresponding guidance mechanism is triggered according to the feature changes, helping the user make better, more convenient use of the smart device.
(2) The system collects the user's usage experience in real time, evaluates itself, adjusts the intelligent guidance strategy according to the actual usage effect, and autonomously generates interactive content that fits the user's habits.
Thus, embodiments of the invention predict the user's expectation by building a label management library of related content and analyzing the user's behavior context in the current scene. Moreover, embodiments match a suitable interaction mode according to the characteristics of the user's usage scene and the expected behavior target, provide corresponding usage guidance prompts, and strengthen intelligent human-computer interaction in device usage.
Fig. 1 is a flowchart of an intelligent interaction method according to an embodiment of the present invention.
As shown in fig. 1, the method includes:
step 101: and determining a user target in an application scene.
In one embodiment, the application scenario includes an interaction class scenario, wherein determining the user target in the application scenario in step 101 includes: and acquiring user behavior data in an application scene, and determining the user target based on the user behavior data. Preferably, determining the user goal based on user behavior data comprises: user behavior data comprising a plurality of temporally related user actions is input to a trained neural network model to output, by the neural network model, a user objective corresponding to the user behavior data.
In one embodiment, the neural network model may also be trained in advance. The training process comprises the following steps: providing historical user behavior data to a neural network model to train the neural network model, wherein the historical user behavior data comprises n temporally correlated historical user actions, the 1st through the (n-1)th user actions serve as inputs of the neural network model, the nth user action serves as the output of the neural network model, and n is a positive integer of at least 2.
The application scenarios may also include random event class scenarios and periodic event class scenarios.
Step 102: determining guidance content associated with the user target and an interaction means associated with the application scenario.
Exemplarily, when the application scenario is a random event type scenario, the interaction manner associated with the application scenario specifically includes:
(1) When the user's goal is to adjust the device volume, the interaction mode includes a voice prompt to adjust directly to the target volume value.
(2) When the user's goal is to adjust the device channel, the interaction mode includes a voice prompt to switch directly to the target channel value.
(3) When the user's goal is to view device content, the interaction mode includes a voice prompt or a gesture prompt to view the target content directly.
(4) When the user's goal is to start an application, the interaction mode includes providing a shortcut entrance to the application.
(5) When the user's goal is to access a link, the interaction mode includes providing a shortcut entrance to the link.
While the above exemplary description describes a typical example of the interaction means associated with an application scenario, those skilled in the art will appreciate that this description is merely exemplary and is not intended to limit the scope of embodiments of the present invention.
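As an illustration of how such goal-to-guidance matching might be organized in code, the following Python sketch keeps a lookup table from predicted goals to interaction modes and prompt templates. The goal identifiers, mode names, and prompt texts are assumptions of this sketch, not definitions from the embodiment.

```python
# A minimal sketch of a user-goal -> guidance mapping; goal identifiers
# and prompt templates are hypothetical (not specified in the patent).
GUIDANCE_BY_GOAL = {
    "adjust_volume":  {"mode": "voice",    "prompt": "Say: set volume to {value}"},
    "adjust_channel": {"mode": "voice",    "prompt": "Say: switch to channel {value}"},
    "view_content":   {"mode": "voice_or_gesture",
                       "prompt": "Say or gesture to open {content} directly"},
    "start_app":      {"mode": "shortcut", "prompt": "Shortcut entrance for {app}"},
    "access_link":    {"mode": "shortcut", "prompt": "Shortcut entrance for {link}"},
}

def guidance_for(goal: str, **slots) -> dict:
    """Return the interaction mode and filled prompt for a predicted user goal."""
    entry = GUIDANCE_BY_GOAL[goal]
    return {"mode": entry["mode"], "prompt": entry["prompt"].format(**slots)}

# e.g., guidance_for("adjust_volume", value=30)
```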
Step 103: providing the guidance content based on the interaction mode.
Here, guidance content is provided to the user based on the interaction means determined in step 102.
In one embodiment, the method further comprises: detecting a user action with respect to the guidance content; and adjusting the interaction triggering time of the interaction mode or the guide content based on the user action.
Wherein adjusting the interaction trigger opportunity of the interaction means based on the user action comprises:
(1) Advancing the interaction trigger opportunity when the user action is issued within a first predetermined time of receiving the guidance content.
(2) Maintaining the interaction trigger opportunity when the user action is issued within a second predetermined time of receiving the guidance content.
(3) Delaying the interaction trigger opportunity when the user action is issued within a third predetermined time of receiving the guidance content.
Wherein the first predetermined time is less than the second predetermined time, and the second predetermined time is less than the third predetermined time.
Further, adjusting the guidance content based on the user action includes: maintaining the guidance content when the user action matches the guidance content; when the user action does not match the guidance content, adjusting the guidance content such that the guidance content matches the user action.
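The adjustment rule above can be summarized in a short sketch; the threshold values, step size, and function names below are illustrative assumptions, not parameters fixed by the embodiment.

```python
# Sketch of the feedback rule above. The time windows t1 < t2 < t3 (seconds)
# and the adjustment step are assumed values, not taken from the embodiment.
def adjust_trigger_offset(offset: float, reaction_time: float,
                          t1: float = 3.0, t2: float = 10.0, t3: float = 30.0,
                          step: float = 1.0) -> float:
    if reaction_time < t1:       # reacted within the first window: advance
        return offset - step
    if reaction_time < t2:       # within the second window: maintain
        return offset
    if reaction_time < t3:       # within the third window: delay
        return offset + step
    return offset                # no usable reaction

def adjust_guidance(guidance: str, user_action: str) -> str:
    # Maintain the guidance when it matches the user's actual action;
    # otherwise regenerate it so that it matches the observed action.
    return guidance if guidance == user_action else user_action
```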
Implementation details of embodiments of the present invention are described in more detail below.
Fig. 2 is an architecture diagram of an intelligent interactive system according to an embodiment of the present invention. As can be seen from fig. 2, the intelligent interactive system includes a context awareness module for identifying user behavior and scene characteristics, a data management module, a policy control module, an information prompt processing module, and an effect statistics feedback module. Wherein:
(1) The context awareness module: this module acquires the environment and state information of the device at the current moment, including the user's behavior instruction records. It may include an active sensing module and a passive sensing module. Active sensing module: the device actively collects the user's operation behavior information and content feature information according to its program settings, and manages the information according to preset rules for subsequent steps. Passive sensing module: this module receives environment information and content feature labels from outside the device (e.g., a server) and sorts the information according to certain rules for subsequent steps to call.
(2) The data management module: this module centrally manages the data required for intelligent interaction; its management covers recording data sources, downloading related data, parsing raw data, and recording content feature labels. The data consists of local data and cloud data. Local data: data the device can directly call, analyze, and process locally. Cloud data: data the device obtains by expanding the local data through the internet cloud, used to supplement the local data or to feed back to users.
(3) The policy control module: this module controls and judges the information prompting system according to the device system's preset logic, and can also analyze user behavior in real time according to the context awareness results to control information prompting. It comprises a basic control module and a user behavior analysis module. Basic control module: controls the content of operation reminders and guidance according to logic preset by the system. User behavior analysis module: controls the presentation logic of operation guidance and content prompt information according to the real-time environment configuration and user behavior data.
(4) The information prompt module: after the system's analysis and judgment, this module performs the corresponding information display and user operation guidance. It needs to acquire the stage of the user's current operation in real time and guide the user's usage, lowering the usage threshold of intelligent interaction and improving the experience.
(5) The effect statistics feedback module: this module collects the effect after the system executes, mainly analyzing whether the user accepts the system prompt and the error rate of usage after the system's guidance. It feeds the results back to the policy control module and the information prompt processing module to promote self-optimization of the system, and makes optimization logic judgments according to the user's behavior.
In addition, the embodiment of the invention can comprehensively analyze the current environment of the equipment, the user behavior data and the content characteristic data and judge the proper content prompt information and the related intelligent operation guide.
Fig. 3 is a schematic diagram of the working phase of intelligent interaction according to an embodiment of the invention.
As shown in fig. 3, the working phase of the intelligent interaction includes an analysis phase, a policy execution phase, and an effect feedback phase.
In the analysis phase: the context awareness module and the data management module distinguish service scenes of the equipment and manage data.
In the policy enforcement phase: the strategy control module judges whether the current equipment has guidable content according to the control strategy, and the information prompt processing module provides the most suitable guidance and feedback information according to the judged result and the result of the analysis stage.
In the effect feedback phase: and the effect statistical feedback module evaluates the effect executed by the current system according to a specific evaluation system and corrects the control strategy according to the evaluation result so as to ensure the use experience of the system.
The analysis phase, policy enforcement phase, and effect feedback phase are exemplarily described below.
(I) Analysis stage (perception, management):
In the analysis stage, the environment information of the device needs to be sensed, to learn the characteristics of the usage scene, the user behavior, and the target content in the current state. According to the type of data management, the application scenes of the system can be divided into intelligent operation guidance, new-content reminders in an App, and the like. Under different usage scenes, the system provides corresponding coping strategies.
Table 1 is a classification table of the scene type.
TABLE 1 (table image not reproduced)
In table 1:
for an interaction class scenario: the system pays attention to the operation behavior of the user, judges the use requirement of the user in real time according to the current behavior path, or predicts the event with the continuity characteristic according to the user behavior, and finally matches the event with a proper intelligent interaction scheme.
For a random event class scenario: these events are characterized by uncertainty; they may occur anytime and anywhere, which means their trigger mechanism for intelligent reminding and guidance needs to be flexible and simple.
For a periodic event scenario: these events occur with a degree of predictability and can be preset, so the trigger mechanism is reliable; the information prompting mode, however, differs from the two scenarios above and allows richer feedback.
In addition, after the system acquires data, the original data needs to be analyzed and processed according to a certain logic and managed in a unified manner. The data management library may include a user-based behavior data management library and a content feature management library based on content attributes.
The data type of the behavior data may include system behavior, preset behavior, historical behavior, behavior strongly related to the current operation of the user, and the like according to the relevance of the target behavior to the user and the system.
The system behavior, the preset behavior, the historical behavior, and the behavior strongly related to the current operation of the user are described below.
(1) System behavior:
the data of the part mainly records the instruction operation record executed by the device in the current environment and the current attribute state of the device.
(2) The preset behavior is as follows:
after analyzing the big data of the user, the behavior library matches the commonly used behavior operation with the result brought by the behavior operation, records the part of behavior as the prediction basis of the behavior target of the user, and has strong relevance between the target and the operation or the operation combination.
(3) Historical behavior:
the device records the data of the behavior combination path of the user in a certain device state as the behavior basis, and predicts the target of the user by comparing the current device state with the user operation.
(4) The user operates strongly-related behaviors currently:
in a common behavior operation combination, when a complete task can be completed only when certain two or more operations are carried out according to fixed logic, the relationship of the partial operations is strong correlation; in addition, when there is a progressive logical association of content update brought about by a simple operation, the target content is referred to as a strongly-related content of the operation behavior. (if the business trip event reminder is set, the user is asked whether the ticket needs to be reserved).
(II) strategy execution stage (control, prompt):
The main function of the policy execution stage is to judge, according to the data obtained in the preceding steps, whether to provide prompt content suitable for the user in the current state, and to decide how to guide the user to perform an appropriate intelligent operation or task.
First, the system provides adaptive decision logic for different application categories to achieve better use experience. Different application scenes correspond to different trigger conditions, and the system judges the logic content which should be followed currently according to the different trigger conditions.
In the interactive class scenario, the policy control module needs to first identify whether there is a reasonable intent in the user's current operation, such as: whether there is a simple repetitive behavior in the current operation, whether there is a relatively obvious direction of change in the system function in the simple behavior, and so on.
Embodiments of the invention can judge whether a simple logical interaction behavior exists by comparing an operation path of a certain length with the system behavior data in the data management module, and provide a corresponding shortcut entrance for such behavior. Meanwhile, embodiments of the invention can also analyze and compare the currently recorded operation path with the preset behaviors, historical behaviors, and strongly-related behaviors in the behavior database, predict the user's behavior target in combination with the current device environment state, and provide a convenient intelligent interaction mode.
FIG. 4 is a schematic diagram of guided interaction of an interaction class scene according to an embodiment of the present invention.
As shown in fig. 4, the user's operation path over N steps is recorded in real time; the system compares and analyzes it to judge whether the operation is simple and whether a predictable explicit target exists, and provides corresponding feedback to the user according to the situation.
FIG. 5 is a flow diagram of a process for event queuing according to an embodiment of the present invention.
Based on the flow shown in fig. 5, behavior information of the user can be captured. First, the system sets a timer and monitors whether there is currently an input event; when an input event occurs, the timer is reset and the event is recorded to the event queue. The system then detects whether the state of the system or the APP has changed and, if so, stores the corresponding system/APP event into the queue. It then checks whether the timer has expired; if not, it waits for the next input, and if so, the system has entered a temporarily stable state. The input content in the temporarily stable state is then analyzed, and a corresponding content event is generated and stored into the event queue. The event queue represents one piece of the user's behavior data; it is stored and later used as prediction data for the user behavior model.
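A minimal sketch of this capture loop follows, assuming poll_input() and poll_state_change() are supplied by the device layer and that a simple placeholder stands in for the content analysis.

```python
# Sketch of the event-capture flow of fig. 5. poll_input() and
# poll_state_change() are assumed device-layer callables that return an
# event or None; summarize() is a placeholder for the content analysis.
import time

def summarize(events):
    # Placeholder content analysis of the temporarily stable state (assumed).
    return {"n_events": len(events)}

def capture_behavior(poll_input, poll_state_change, timeout: float = 5.0):
    queue = []
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        event = poll_input()
        if event is not None:             # input event: reset timer, record it
            deadline = time.monotonic() + timeout
            queue.append(("input", event))
        state = poll_state_change()       # system/APP state change, if any
        if state is not None:
            queue.append(("state", state))
        time.sleep(0.05)
    # Timer expired: temporarily stable state; derive a content event.
    queue.append(("content", summarize(queue)))
    return queue                          # one piece of user behavior data
```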
After the behavior data of the user is obtained, the RNN model is preferably used to construct a behavior model of the user.
FIG. 6 is a diagram illustrating RNN model-based event prediction according to an embodiment of the present invention.
As shown in fig. 6, assuming that the length of the user behavior data is n, the first n-1 user behavior data in each piece of user behavior data is used as input, the nth user behavior data is used as output, and training is performed to obtain a corresponding user behavior model. Then, by using the user behavior model, the event to be generated next can be predicted based on the generated event, so as to obtain the target prediction result of the current user behavior.
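A minimal training sketch of this scheme, assuming a PyTorch RNN over integer event identifiers; the vocabulary size and layer dimensions are illustrative assumptions.

```python
# Sketch of the RNN behavior model of fig. 6: each length-n event sequence
# is split into the first n-1 events (input) and the n-th event (target).
import torch
import torch.nn as nn

class BehaviorRNN(nn.Module):
    def __init__(self, n_events: int = 100, emb: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_events, emb)
        self.rnn = nn.RNN(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_events)

    def forward(self, seq):              # seq: (batch, n-1) event ids
        h, _ = self.rnn(self.embed(seq))
        return self.out(h[:, -1])        # logits over the next (n-th) event

model = BehaviorRNN()
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

def train_step(seqs, targets):           # seqs: (batch, n-1); targets: (batch,)
    opt.zero_grad()
    loss = loss_fn(model(seqs), targets)
    loss.backward()
    opt.step()
    return loss.item()

# e.g., train_step(torch.randint(0, 100, (8, 9)), torch.randint(0, 100, (8,)))
```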
When the system judges that the user's current operation behavior has a certain logical direction or a definite purpose, the information prompt processing module quickly matches a corresponding intelligent operation and, according to that tendency and purpose, produces guidance information for performing the operation quickly.
Table 2 is a schematic table of the matching manner of the behavioral target and the intelligent operation guidance.
TABLE 2 (table image not reproduced)
Unlike interactive scenes, the trigger mechanisms of random events and periodic events are more definite: when the system detects that a trigger condition is met, it counts one effective trigger. However, the attributes of these events are richer and repeated triggering is more likely, so the event control strategy for this part concentrates on managing the triggering. Wherein:
Trigger: the condition factors that cause the guidance system to form. Loop: the repetition mechanism of the guidance system for a given function; guidance is performed 1 to N times depending on the type of guidance content. Guidance priority: the display ordering applied when multiple guidance contents satisfy the same guidance condition.
FIG. 7 is a diagram illustrating a guiding interaction of a random event according to an embodiment of the present invention.
Considering the strongly accidental nature of random events, the logic for triggering their intelligent guidance concentrates on changes in a certain attribute feature of the event. When an event attribute is updated and the update has some influence on the user, the system reminds the user in differently intrusive ways according to the degree of influence of the change on the user, guiding the user to perform the expected operation and resolve the event.
The core of the system's judgment here comprises two parts. First, the policy control module judges trigger-system updates and changes according to discoverable event attributes, so the key of this link lies in the definition of event attributes. Second, after being triggered by a random event, the system judges the user's usage scene according to the current device state, selects a suitable information reminding and guiding mechanism, and integrates the information feedback content by combining information such as the number, priority, and category of the events.
FIG. 8 is a diagram illustrating a guidance interaction for a periodic event according to an embodiment of the present invention.
A periodic event differs from a random event in that its trigger mechanism follows certain inherent logic, such as time nodes, device state, and operating environment. Judging these events requires combining certain compound trigger conditions; the time interval in which reminding is allowed is often wide, and within it the system's intelligent guidance feedback can change according to certain rules.
For random and periodic events, corresponding policies may be set to satisfy intelligent guidance for different events.
Table 3 is an intelligent boot policy table for random and periodic events.
TABLE 3 (table image not reproduced)
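Since the table image is not reproduced, the sketch below illustrates how a per-event policy entry with the trigger/loop/priority fields defined above might look; the event names, trigger labels, and the convention that a lower number means higher priority are assumptions of this sketch.

```python
# Illustrative per-event guidance policies with the trigger / loop /
# priority fields defined above; all concrete values are assumptions.
GUIDANCE_POLICIES = {
    "app_update": {"trigger": "version_attribute_changed",  # random event
                   "loop": 1, "priority": 2},
    "holiday":    {"trigger": "date_matches_holiday_rule",  # periodic event
                   "loop": 3, "priority": 1},
}

def due_policies(fired_triggers):
    """Return the policies whose trigger fired, highest priority first
    (lower number = higher priority, assumed)."""
    hits = [(name, p) for name, p in GUIDANCE_POLICIES.items()
            if p["trigger"] in fired_triggers]
    return sorted(hits, key=lambda item: item[1]["priority"])
```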
Meanwhile, to guarantee the user's experience during intelligent guidance and information reminding, the core principle is: when many events await guidance, the system is defined so that, through complementary optimization of strategies and specific settings, it can still guide the user reasonably.
Table 4 is a guidance alert schematic for random events (application updates) and periodic events (holidays).
TABLE 4 (table image not reproduced)
In addition, in the aspect of intelligent interaction, the embodiment of the invention can provide a multidimensional interaction mode for the user, and intelligently selects the intelligent interaction which is most suitable for the user according to different equipment scenes.
Table 5 is a schematic table of the intelligent guidance feedback.
TABLE 5 (table image not reproduced)
(III) effect feedback stage:
In the effect feedback stage, embodiments of the invention monitor the effect of the system and establish a related evaluation system to assess the value of the intelligent guidance, the experience of the guidance mode, and the accuracy of the intelligent guidance. The effect feedback mainly involves the following three parameters:
(a) System trigger timing:
To determine whether the trigger timing of the intelligent system is appropriate, embodiments of the invention collect the user's reaction after receiving the system's guidance prompt content, and adjust the time for triggering the guidance of interactive content according to that reaction, so as to guarantee the user experience. If the user actively closes the prompt upon receiving the feedback, the timing was judged too early (the user had no need yet), or the judgment was based on insufficient evidence and did not meet the user's need. If the user jumps to the intelligent guidance within 3 seconds of receiving the prompt, the timing has room to be advanced. If the user enters within 3 to N seconds of receiving the guidance, the prompt supplemented part of the information the user needed and has some value for the user's choice, so the system maintains the state. If the user still has not reacted after N seconds, the current scene is judged to have no reference value.
(b) Interruption rate of the boot process:
This index indicates whether the feedback scheme the system provides for the user is reasonable and whether the user accepts the intelligent interaction mode it offers. Embodiments of the invention evaluate this index as the ratio of the number of times the system's guidance feedback is completed to the number of times the system is triggered.
(c) Matching degree of guidance content:
This evaluates whether the content provided by the guidance mode conforms to the user's behavior habits, by comparing the guidance content with the content the user actually executes. If there is a large difference, the user's behavior habits are analyzed and the content is corrected to match them; likewise, if the user is detected closing the prompt within 3 to N seconds of receiving the guidance information, the way the content conveys information is hard for the user to accept.
FIG. 9 is a diagram illustrating adjusting a guidance interaction based on feedback information, according to an embodiment of the invention. In fig. 9, the guidance timing and content are adjusted based on the feedback information.
In timing adjustment, the display timing is mainly adjusted (delayed or advanced). In content adjustment, analysis is performed according to the content analysis result of the current event and the model of predicted events; a corresponding target event is proposed, a confidence evaluation is performed on the result, and guidance content is finally generated according to the target event. Fig. 10 is a diagram illustrating adjustment of guidance content according to an embodiment of the present invention.
FIG. 11 is a diagram illustrating an overall process of guiding a user to perform intelligent interaction according to an embodiment of the present invention.
The key points of guiding the user to perform intelligent interaction in embodiments of the invention comprise: modeling user behavior and constructing a behavior prediction model; automatically discovering scenes and generating guidance content; and self-adjustment of the prediction model and guidance control based on feedback.
Embodiments of the present invention are described below with reference to specific examples.
Embodiments of the invention mainly analyze the user's usage scene, predict the user's demand target by building a user portrait or by triggering corresponding mechanism conditions, and judge whether a corresponding intelligent interaction mode can be provided, so as to reduce the user's behavior cost and improve the device usage experience.
FIG. 12 is a diagram illustrating guided interactions for simple interactions, according to an embodiment of the invention.
As shown in fig. 12, in the simple interaction scenario, the following guided interaction steps are included:
step 1: in the current scenario, a user controls the smart tv using a remote control.
Step 2: the user frequently adjusts the volume through the volume adjustment button.
And step 3: the system background identifies n behaviors of the user: a1 to An.
And 4, step 4: comparing the behavior patterns of the user by the system: a1= A2= \8230 = An, A1+ A2+ A3= A4+ A5+ A6= \8230;, 8230;.
And 5: the operation behavior logic of the current user is judged to be simple, and the target is clear.
Step 6: the user target is obtained as follows: the volume is adjusted to a suitable level.
And 7: whether the current equipment has a proper intelligent interaction mode or not is detected, namely whether the current equipment has a voice interaction function or not is detected.
And 8: feedback content is prepared.
And step 9: and detecting whether the use scene accords with the guide opportunity, for example, judging that the user still performs volume adjustment operation and then determining that the use scene accords with the guide opportunity.
Step 10: pushing intelligent guidance content, such as voice prompts: "you can try to tell me directly: the volume is adjusted to XXX ".
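A minimal sketch of the repetition check in steps 3-5, assuming the behaviors are recorded as strings and the window size n is configurable:

```python
# Sketch of the simple-behavior check: if the last n recorded actions are
# identical (A1 = A2 = ... = An), the operation is judged logically simple
# with a clear target. The window size n is an assumed parameter.
def is_simple_repetition(actions, n: int = 5) -> bool:
    recent = actions[-n:]
    return len(recent) == n and len(set(recent)) == 1

if is_simple_repetition(["vol_up"] * 6):
    print("Try saying: adjust the volume to XXX")  # push voice guidance
```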
FIG. 13 is a diagram illustrating a guiding interaction for strongly relevant user behavior according to an embodiment of the present invention.
As shown in fig. 13, in the strongly correlated user behavior, the following guiding interaction steps are included:
step 1: in the current scene, the user adjusts the environment
Step 2: user control intelligent air conditioner to sleep mode "
And step 3: the user then executes a series of actions, such as locking the door, closing the curtain, reducing the illumination intensity, 8230, and the like.
And 4, step 4: the system creates a user representation: the current user will complete a series of related operations after performing air conditioning adjustment in the time period.
And 5: performing scene recognition-, the recognition result is: user environment adjustment in current time period
Step 6: user control intelligent air conditioner to sleep mode "
And 7: the system detects "user behavior a- > adjust air conditioner mode to sleep" as a trigger condition for a sequence of related behaviors.
And step 8: the user target is determined, and at this time, a series of operations are completed to a certain extent.
And step 9: whether the use scene meets the guiding condition is detected, for example, when no interference factor exists in the environment of the detection equipment, the guiding condition is determined to be met.
Step 10: push intelligent guidance content, such as voice push: "will you adjust (environmental parameters), confirm that execution is needed? "
Step 11: the user replies to the device directly with a voice to effect the confirmation.
Fig. 14 is a schematic diagram of guidance interaction for dynamically adjusting feedback opportunities to meet user expectations according to an embodiment of the present invention.
As shown in fig. 14, the interactive process of the scenario includes:
step 1: in the current scenario, the user uses the smart device to complete the specific operation.
Step 2: and recording the user behavior by the system.
And step 3: comparing the current behavior with the user historical behavior record to predict user behavior targets \8230; (repeated real-time comparison).
And 4, step 4: the system detects that the user turns on the fitness APP.
And 5: and performing scene recognition to find that the current user needs to perform fitness activities.
And 6: the system retrieves the user's historical behavior in which the workout and music playback would be performed simultaneously.
And 7: the system prompts the user "you can try to say open music" 3s after the fitness application is opened.
And 8: the user uses the voice command "open music" (the system determines that the time interval between the user's execution and the presentation is less than 3 s)
And step 9: and feeding back the data less than 3s to the control center, and judging that the prompt in the current scene has a space ahead of time.
Step 10: the system adjusts the feedback time in the specific scene and advances the prompting time.
Step 11: the system again detects that the user turns on the fitness APP.
Step 12: and performing scene recognition to find that the current user needs to perform fitness activities.
Step 13: the system prompts the user "you can try to say open music" before executing the fitness application to open.
Thus, compared with the prior art, the main advantages are: by intelligently identifying the user's usage state, a suitable intelligent interaction mode can be offered to the user; by collecting the user's reactions, whether the current guidance scheme is appropriate can be judged; and by adjusting the control strategy in real time, a feedback content library better suited to the current user is generated intelligently.
Based on the above description, the embodiment of the invention also provides an intelligent interaction device.
Fig. 15 is a block diagram of an intelligent interaction device according to the present invention.
As shown in fig. 15, the intelligent interaction device includes:
a first determining module 1501, configured to determine a user target in an application scenario;
a second determining module 1502 for determining guidance content associated with the user objective and an interaction means associated with the application scenario;
a providing module 1503 is configured to provide the guidance content based on the interaction manner.
In an embodiment, the application scenario includes an interaction class scenario, and the first determining module 1501 is configured to collect user behavior data in the application scenario and determine the user target based on the user behavior data. In one embodiment, the application scenario includes a random event class scenario, and the second determining module 1502 is configured to monitor characteristics of a specified object in the application scenario and determine an interaction trigger opportunity when the characteristics change; or the application scenario includes a periodic event class scenario, and the second determining module 1502 is configured to monitor a periodic trigger value in the application scenario and determine an interaction trigger opportunity when the periodic trigger value appears.
In one embodiment, the first determining module 1501 is configured to input user behavior data comprising a plurality of temporally related user actions to a trained neural network model to output a user goal corresponding to the user behavior data from the neural network model.
In one embodiment, the first determining module 1501 is further configured to provide historical user behavior data to a neural network model to train the neural network model, where the historical user behavior data includes n temporally correlated historical user actions, the 1st through the (n-1)th user actions are used as inputs to the neural network model, the nth user action is used as the output of the neural network model, and n is a positive integer of at least 2.
In one embodiment, when the user goal is to adjust the device volume, the interaction mode includes a voice prompt to adjust directly to a target volume value; when the user goal is to adjust the device channel, the interaction mode includes a voice prompt to switch directly to a target channel value; when the user goal is to view device content, the interaction mode includes a voice prompt or a gesture prompt to view the target content directly; when the user goal is to start an application, the interaction mode includes providing a shortcut entrance to the application; and when the user goal is to access a link, the interaction mode includes providing a shortcut entrance to the link.
In one embodiment, an adjustment module 1504 is further included for detecting a user action with respect to the guidance content; and adjusting the interaction triggering opportunity or the guiding content of the interaction mode based on the user action.
In one embodiment, the adjusting module 1504 is configured to advance the interaction trigger opportunity when the user action is issued within a first predetermined time of receiving the guidance content; maintain the interaction trigger opportunity when the user action is issued within a second predetermined time of receiving the guidance content; and delay the interaction trigger opportunity when the user action is issued within a third predetermined time of receiving the guidance content; wherein the first predetermined time is less than the second predetermined time, and the second predetermined time is less than the third predetermined time.
In one embodiment, the adjustment module 1504 is configured to maintain the guidance content when the user action matches the guidance content; when the user action does not match the guidance content, adjusting the guidance content so that the guidance content matches the user action.
The embodiment of the invention also provides the intelligent equipment with the memory-processor architecture.
Fig. 16 is a block diagram of a smart device having a memory-processor architecture in accordance with the present invention.
As shown in fig. 16, the smart device having a memory-processor architecture includes a processor 1601 and a memory 1602. The memory 1602 stores an application program executable by the processor 1601, for causing the processor 1601 to perform the intelligent interaction method described in any of the above. The memory 1602 may be implemented as various storage media such as an electrically erasable programmable read-only memory (EEPROM), a flash memory, or a programmable read-only memory (PROM). The processor 1601 may be a central processing unit (CPU). Illustratively, the processor 1601 includes at least one of a CPU, a semiconductor-based microprocessor, or a programmable logic device (PLD). Exemplary PLDs include application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), programmable array logic (PAL), complex programmable logic devices (CPLDs), and erasable programmable logic devices (EPLDs). The processor 1601 may include multiple processing elements integrated in a single device or distributed across devices, and its processing resources may process instructions sequentially, concurrently, or partially concurrently.
In summary, embodiments of the present invention determine a user target in an application scenario; determine guidance content associated with the user target and an interaction mode associated with the application scenario; and provide the guidance content based on the interaction mode. By analyzing the application scenario, embodiments of the invention can therefore offer the user an interaction mode and guidance content that match expectations. In addition, embodiments of the invention collect the user's reaction to judge whether the current guidance scheme is appropriate, adjust the control strategy in real time, and intelligently generate guidance content better suited to the current user.
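Read end to end, the flow is: infer the target, pick guidance content and a mode, provide the guidance, then observe the reaction and adjust. The loop below is a simplified, hypothetical stand-in for that pipeline (for instance, a frequency count replaces the trained predictor); none of the names or implementations come from the patent.

```python
# Simplified one-pass sketch of the summarized interaction flow.
from collections import Counter

def determine_user_target(recent_actions: list[str]) -> str:
    # stand-in for the trained predictor: most frequent recent action
    return Counter(recent_actions).most_common(1)[0][0]

def guidance_for(target: str) -> str:
    return f"Tip: say '{target.replace('_', ' ')}' to do this directly."

def run_interaction(recent_actions: list[str]) -> str:
    target = determine_user_target(recent_actions)  # determine user target
    guidance = guidance_for(target)                 # determine guidance content
    print(guidance)                                 # provide via chosen mode
    return guidance  # downstream: detect the user action and adjust

run_interaction(["volume_up", "volume_up", "open_app", "volume_up"])
```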
It should be noted that not all steps and modules in the above flows and structures are necessary; some steps or modules may be omitted according to actual needs, and the execution order of the steps is not fixed and may be adjusted as required. The division into modules is merely a functional division adopted for convenience of description: in actual implementation, one module may be realized by multiple modules, the functions of multiple modules may be realized by a single module, and these modules may reside in the same device or in different devices.
The hardware modules in the various embodiments may be implemented mechanically or electronically. For example, a hardware module may include permanent circuitry or logic devices that are specially designed to perform certain operations. A hardware module may also include programmable logic devices or circuits (e.g., including a general-purpose processor or other programmable processor) that are temporarily configured by software to perform certain operations.
The present invention also provides a machine-readable storage medium storing instructions that cause a machine to perform a method as described herein. Specifically, a system or apparatus may be provided that is equipped with a storage medium storing software program code realizing the functions of any of the above embodiments, and a computer (or CPU or MPU) of the system or apparatus reads out and executes the program code stored on the storage medium. Part or all of the actual operations may also be performed by an operating system running on the computer, based on instructions derived from the program code. The functions of any of the above embodiments may likewise be implemented by writing the program code read from the storage medium into a memory on an expansion board inserted into the computer, or into a memory in an expansion unit connected to the computer, and then having a CPU mounted on the expansion board or expansion unit perform part or all of the actual operations based on those instructions.
Embodiments of the storage medium for supplying the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer or the cloud via a communication network.
"exemplary" means "serving as an example, instance, or illustration" herein, and any illustration, embodiment, or steps described as "exemplary" herein should not be construed as a preferred or advantageous alternative. For the sake of simplicity, the drawings are only schematic representations of the relevant parts of the invention, and do not represent the actual structure of the product. Moreover, in the interest of brevity and understanding, only one of the components having the same structure or function is illustrated schematically or designated in some of the drawings. In this document, "a" does not mean that the number of the relevant portions of the present invention is limited to "only one", and "a" does not mean that the number of the relevant portions of the present invention "more than one" is excluded. In this document, "upper", "lower", "front", "rear", "left", "right", "inner", "outer", and the like are used only to indicate relative positional relationships between relevant portions, and do not limit absolute positions of the relevant portions.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (16)

1. An intelligent interaction method, comprising:
determining a user target in an application scenario;
determining guidance content associated with the user target and an interaction mode associated with the application scenario;
providing the guidance content based on the interaction mode;
further comprising:
detecting a user action with respect to the guidance content;
adjusting an interaction trigger opportunity of the interaction mode or the guidance content based on the user action;
the adjusting the interaction trigger opportunity of the interaction mode based on the user action comprises:
advancing the interaction trigger opportunity when the user action is issued within a first predetermined time of receiving the guidance content;
maintaining the interaction trigger opportunity when the user action is issued within a second predetermined time of receiving the guidance content;
lagging the interaction trigger opportunity when the user action is issued within a third predetermined time of receiving the guidance content;
wherein the first predetermined time is less than the second predetermined time, which is less than the third predetermined time.
2. The intelligent interaction method of claim 1,
the application scenario comprises an interaction class scenario, wherein the determining of the user target in the application scenario comprises: acquiring user behavior data in the application scenario, and determining the user target based on the user behavior data.
3. The intelligent interaction method of claim 1,
the application scene comprises a random event class scene, wherein the determining of the interaction mode associated with the application scene comprises: monitoring characteristics of a specified object in the application scene; when the characteristics change, determining an interaction triggering opportunity; or
The application scene comprises a periodic event class scene, wherein the determining of the interaction mode associated with the application scene comprises: monitoring a periodic trigger value in the application scene; and when the periodic trigger value occurs, determining an interaction trigger opportunity.
4. The intelligent interaction method of claim 2, wherein the determining the user goal based on user behavior data comprises:
user behavior data comprising a plurality of temporally related user actions is input to a trained neural network model, so that the neural network model outputs a user target corresponding to the user behavior data.
5. The intelligent interaction method of claim 4, further comprising a process of pre-training the neural network model, the process comprising:
providing historical user behavior data to a neural network model to train the neural network model, wherein the historical user behavior data comprises n temporally related historical user actions, the 1st through the (n-1)th user actions being used as inputs to the neural network model, the nth user action being used as the output of the neural network model, and n being a positive integer of at least 2.
6. The intelligent interaction method of claim 2,
when the user target is to adjust the device volume, the interaction mode comprises a voice prompt for adjusting directly to a target volume value;
when the user target is to adjust the device channel, the interaction mode comprises a voice prompt for switching directly to a target channel;
when the user target is to view device content, the interaction mode comprises a voice prompt or a gesture prompt for viewing the target content directly;
when the user target is to start an application, the interaction mode comprises providing a shortcut entry to the application;
and when the user target is to access a link, the interaction mode comprises providing a shortcut entry to the link.
7. The intelligent interaction method of claim 1, wherein the adjusting the guidance content based on the user action comprises:
maintaining the guidance content when the user action matches the guidance content;
when the user action does not match the guidance content, adjusting the guidance content such that the guidance content matches the user action.
8. An intelligent interaction device, comprising:
a first determining module, configured to determine a user target in an application scenario;
a second determining module, configured to determine guidance content associated with the user target and an interaction mode associated with the application scenario;
a providing module, configured to provide the guidance content based on the interaction mode; further comprising:
an adjustment module, configured to detect a user action with respect to the guidance content, and to adjust an interaction trigger opportunity of the interaction mode or the guidance content based on the user action;
the adjustment module is configured to advance the interaction trigger opportunity when the user action is issued within a first predetermined time of receiving the guidance content; maintain the interaction trigger opportunity when the user action is issued within a second predetermined time of receiving the guidance content; and lag the interaction trigger opportunity when the user action is issued within a third predetermined time of receiving the guidance content; wherein the first predetermined time is less than the second predetermined time, which is less than the third predetermined time.
9. The intelligent interaction device of claim 8,
the application scenario comprises an interaction class scenario, and the first determining module is configured to acquire user behavior data in the application scenario and determine the user target based on the user behavior data.
10. The intelligent interaction device of claim 8,
the application scenario comprises a random event class scenario, and the second determining module is configured to monitor characteristics of a specified object in the application scenario and determine an interaction trigger opportunity when the characteristics change; or
the application scenario comprises a periodic event class scenario, and the second determining module is configured to monitor a periodic trigger value in the application scenario and determine an interaction trigger opportunity when the periodic trigger value occurs.
11. The intelligent interaction device of claim 9,
the first determining module is configured to input user behavior data comprising a plurality of temporally related user actions to a trained neural network model, so that the neural network model outputs a user target corresponding to the user behavior data.
12. The intelligent interaction device of claim 11,
the first determining module is further configured to provide historical user behavior data to a neural network model to train the neural network model, wherein the historical user behavior data comprises n temporally related historical user actions, the 1st through the (n-1)th user actions are used as inputs to the neural network model, the nth user action is used as the output of the neural network model, and n is a positive integer of at least 2.
13. The intelligent interaction device of claim 9,
when the user target is to adjust the device volume, the interaction mode comprises a voice prompt for adjusting directly to a target volume value;
when the user target is to adjust the device channel, the interaction mode comprises a voice prompt for switching directly to a target channel;
when the user target is to view device content, the interaction mode comprises a voice prompt or a gesture prompt for viewing the target content directly;
when the user target is to start an application, the interaction mode comprises providing a shortcut entry to the application;
and when the user target is to access a link, the interaction mode comprises providing a shortcut entry to the link.
14. The intelligent interaction device of claim 8,
the adjustment module is configured to maintain the guidance content when the user action matches the guidance content, and to adjust the guidance content to match the user action when the two do not match.
15. A smart device comprising a processor and a memory;
the memory has stored therein an application program executable by the processor for causing the processor to perform the intelligent interaction method of any one of claims 1 to 7.
16. A computer-readable storage medium having computer-readable instructions stored therein for performing the intelligent interaction method of any of claims 1 to 7.
CN202011413811.6A 2020-12-04 2020-12-04 Intelligent interaction method and device and intelligent equipment Active CN112380334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011413811.6A CN112380334B (en) 2020-12-04 2020-12-04 Intelligent interaction method and device and intelligent equipment

Publications (2)

Publication Number Publication Date
CN112380334A CN112380334A (en) 2021-02-19
CN112380334B (en) 2023-03-24

Family

ID=74590523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011413811.6A Active CN112380334B (en) 2020-12-04 2020-12-04 Intelligent interaction method and device and intelligent equipment

Country Status (1)

Country Link
CN (1) CN112380334B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105690385A (en) * 2016-03-18 2016-06-22 北京光年无限科技有限公司 Application calling method and device based on intelligent robot
CN107665230A (en) * 2017-06-21 2018-02-06 海信集团有限公司 Training method and device for the users' behavior model of Intelligent housing
CN108153879A (en) * 2017-12-26 2018-06-12 爱因互动科技发展(北京)有限公司 The method and device of recommendation information is provided a user by human-computer interaction
CN108983620A (en) * 2018-06-22 2018-12-11 联想(北京)有限公司 A kind of control method, device and electronic equipment
CN110839175A (en) * 2018-08-15 2020-02-25 Tcl集团股份有限公司 Interaction method based on smart television, storage medium and smart television

Also Published As

Publication number Publication date
CN112380334A (en) 2021-02-19

Similar Documents

Publication Publication Date Title
US7519564B2 (en) Building and using predictive models of current and future surprises
US7778715B2 (en) Methods and systems for a prediction model
US7698055B2 (en) Traffic forecasting employing modeling and analysis of probabilistic interdependencies and contextual data
US20100318576A1 (en) Apparatus and method for providing goal predictive interface
CN112164401B (en) Voice interaction method, server and computer-readable storage medium
CN105635824A (en) Personalized channel recommendation method and system
CN101256591A (en) Principles and methods for personalizing newsfeeds via an analysis of information novelty and dynamics
CN110545465B (en) Video playing method, terminal and storage medium
US9330317B2 (en) Systems and methods for multi-pass adaptive people counting
CN110992937A (en) Language offline recognition method, terminal and readable storage medium
KR20190095180A (en) An artificial intelligence apparatus for controlling auto stop system and method for the same
CN111125429A (en) Video pushing method and device and computer readable storage medium
KR20120092459A (en) Method for providing context aware service using mobile terminal and system thereof
CN112380334B (en) Intelligent interaction method and device and intelligent equipment
US11720231B2 (en) Vehicle having an intelligent user interface
US20240046931A1 (en) Voice interaction method and apparatus
CN108989894A (en) Method and apparatus for playing TV programme
Koychev Tracking changing user interests through prior-learning of context
CN111563989A (en) Intelligent control method of gate, gate and computer readable storage medium
CN115599260A (en) Intelligent scene generation method, device and system, storage medium and electronic device
KR102398006B1 (en) Self improving object recognition method and system through image capture
CN115662400A (en) Processing method, device and equipment for voice interaction data of vehicle machine and storage medium
US11704683B1 (en) Machine learning system, method, and computer program for household marketing segmentation
CN114928766A (en) System and method for automatically adjusting video clips
CN117850261A (en) Intelligent switch sensor data analysis method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant