CN110597973A - Man-machine conversation method, device, terminal equipment and readable storage medium - Google Patents

Man-machine conversation method, device, terminal equipment and readable storage medium Download PDF

Info

Publication number
CN110597973A
Authority
CN
China
Prior art keywords
label
preset
user
determining
conversation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910880191.8A
Other languages
Chinese (zh)
Other versions
CN110597973B (en)
Inventor
戴世昌
张军
闫羽婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910880191.8A priority Critical patent/CN110597973B/en
Priority claimed from CN201910880191.8A external-priority patent/CN110597973B/en
Publication of CN110597973A publication Critical patent/CN110597973A/en
Application granted granted Critical
Publication of CN110597973B publication Critical patent/CN110597973B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Library & Information Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a man-machine conversation method, a man-machine conversation device, terminal equipment and a readable storage medium, which are used for improving user experience in a man-machine conversation process. The method comprises the following steps: acquiring conversation features of a user in a man-machine conversation process; determining a first label of the user according to the conversation features; determining, according to a preset label matching rule, a second label that matches the first label in a preset label set; determining the preset material corresponding to the second label according to the correspondence between the second label and the preset material; and displaying the preset material through a preset virtual image.

Description

Man-machine conversation method, device, terminal equipment and readable storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method and an apparatus for man-machine interaction, a terminal device, and a readable storage medium.
Background
With the development of artificial intelligence technology, man-machine conversation is applied in more and more scenarios. For example, some avatars are built into terminals, and a user can hold a man-machine conversation with such an avatar, which can imitate human actions or expressions.
Currently, the main approach to human-computer conversation is for the terminal to detect the voice or text input by the user; when a preset text appears, the avatar reacts correspondingly, for example by performing a preset action or playing back some voice.
However, the above method can only realize basic conversation, and the user experience is poor.
Disclosure of Invention
The embodiment of the application provides a man-machine conversation method, a man-machine conversation device, terminal equipment and a readable storage medium, which are used for improving user experience in a man-machine conversation process.
In view of this, a first aspect of the embodiments of the present application provides a method for man-machine interaction, including:
acquiring conversation characteristics of a user in a man-machine conversation process;
determining a first label of a user according to the conversation feature;
determining a second label matched with the first label in a preset label set according to a preset label matching rule;
determining a preset material corresponding to a second label according to the corresponding relation between the second label and the preset material;
and displaying the preset material through a preset virtual image.
In a first implementation manner of the first aspect of the embodiment of the present application, the determining a first tag of a user according to the dialog feature includes:
analyzing the emotion of the user according to the conversation characteristics;
and determining the first label according to the emotion analysis result.
In a second implementation manner of the first aspect of the embodiment of the present application, the dialog feature includes an input speed of the user, and the determining a first tag of the user according to the dialog feature includes:
if the input speed is lower than a preset first speed, determining that the first label of the user is a preset third label;
and if the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label.
In a third implementation manner of the first aspect of the embodiments of the present application, the dialog feature includes an interval time of the user response, and the determining a first tag of the user according to the dialog feature includes:
if the interval time is less than the preset first time, determining that the first label of the user is a preset fifth label;
and if the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label.
In a fourth implementation manner of the first aspect of the embodiment of the present application, when the dialog feature includes text content input by the user, the determining, according to the dialog feature, the first tag of the user includes:
extracting key words in the text content input by the user;
and determining a first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
In a fifth implementation manner of the first aspect of the embodiment of the present application, the determining, according to a preset tag matching rule, a second tag that matches the first tag in a preset tag set includes:
and determining a preset seventh label which is a synonym with the first label in a preset label set, and taking the preset seventh label as a second label.
In a sixth implementation manner of the first aspect of the embodiment of the present application, the preset material includes one or more of an action material, an expression material, and an audio material.
In a seventh implementation manner of the first aspect of the embodiments of the present application, the method is applied to a terminal, where the terminal is a block node device in a block chain.
A second aspect of the embodiments of the present application provides an apparatus for human-computer conversation, including:
the acquisition unit is used for acquiring conversation characteristics of a user in a man-machine conversation process;
the first determining unit is used for determining a first label of the user according to the conversation characteristics;
the matching unit is used for determining a second label matched with the first label in a preset label set according to a preset label matching rule;
the second determining unit is used for determining the preset material corresponding to the second label according to the corresponding relation between the second label and the preset material;
and the display unit is used for displaying the preset materials through preset virtual images.
In a first implementation manner of the second aspect of the embodiment of the present application, the first determining unit is configured to:
analyzing the emotion of the user according to the conversation characteristics;
and determining the first label according to the emotion analysis result.
In a second implementation manner of the second aspect of the embodiment of the present application, the dialog feature includes an input speed of the user, and the first determining unit is configured to:
if the input speed is lower than a preset first speed, determining that the first label of the user is a preset third label;
and if the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label.
In a third implementation manner of the second aspect of the embodiment of the present application, the dialog feature includes an interval time of the user response, and the first determining unit is configured to:
if the interval time is less than the preset first time, determining that the first label of the user is a preset fifth label;
and if the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label.
In a fourth implementation manner of the second aspect of the embodiment of the present application, when the dialog feature includes text content input by the user, the first determining unit is configured to:
extracting key words in the text content input by the user;
and determining a first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
In a fifth implementation manner of the second aspect of the embodiment of the present application, the matching unit is configured to:
and determining a preset seventh label which is a synonym with the first label in a preset label set, and taking the preset seventh label as a second label.
In a sixth implementation manner of the second aspect of the embodiment of the present application, the preset material includes one or more of an action material, an expression material, and an audio material.
In a seventh implementation manner of the second aspect of the embodiment of the present application, the apparatus is applied to a terminal, and the terminal is a block node device in a block chain.
A third aspect of the embodiments of the present application provides a terminal device, including: a memory, a transceiver, a processor, and a bus system;
wherein the memory is used for storing programs;
the processor is configured to execute a program in the memory to perform the method according to any one of the first aspect of the embodiments of the present application.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, comprising instructions, which, when executed on a computer, cause the computer to perform the method according to any one of the first aspect of embodiments of the present application.
A fifth aspect of embodiments of the present application provides a computer program product comprising computer software instructions that, when executed by a processor, perform the method according to any one of the first aspect of embodiments of the present application.
According to the technical scheme, the embodiment of the application has the following advantages:
Firstly, the conversation features of a user are acquired during a man-machine conversation. These features can cover multiple dimensions, such as the user's input speed, the interval time of the user's responses, the text content input by the user, and the intonation and mood of the user's voice input, so that they capture many aspects of the man-machine conversation. Then, a first label of the user is determined according to the conversation features; because the features cover so many aspects, the first label can represent the user's current state well. Next, a second label matching the first label is determined in the preset label set according to the preset label matching rule, and the preset material corresponding to the second label is determined according to the correspondence between the second label and the preset material. Finally, the preset material, which includes one or more of action material, expression material and audio material, is displayed through a preset virtual image, allowing the avatar to converse with the user more vividly and naturally and thereby improving the user experience.
Drawings
FIG. 1 is a schematic diagram of an application scenario of a method of human-computer conversation in an embodiment of the present application;
FIG. 2 is a schematic diagram of an architecture of a human-machine dialog system according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an embodiment of a method for human-machine interaction according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a first embodiment showing preset materials in the embodiment of the present application;
FIG. 5 is a schematic diagram of a second embodiment showing preset materials in the embodiment of the present application;
FIG. 6 is a schematic diagram of an embodiment of a data sharing system in an embodiment of the present application;
FIG. 7 is a schematic diagram of a block chain in an embodiment of the present application;
FIG. 8 is a diagram illustrating a process of generating a new block in an embodiment of the present application;
FIG. 9 is a schematic diagram of an embodiment of a human-machine interaction apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic diagram of an embodiment of a terminal device according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a man-machine conversation method, a man-machine conversation device, terminal equipment and a readable storage medium, which are used for improving user experience in a man-machine conversation process.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that the present application applies to human-machine dialog scenarios, in particular to scenarios in which a user converses with a virtual character through a terminal. Referring to fig. 1, an application scenario of the method of human-computer conversation in the embodiment of the present application is illustrated. Fig. 1 shows an avatar with which a user can hold a man-machine conversation through the terminal; the avatar can simulate a series of human actions, expressions, speech, and the like as feedback.
For ease of understanding, the present application provides a method of man-machine conversation applied to the man-machine dialog system shown in fig. 2. Please refer to fig. 2, which is an architecture diagram of the man-machine dialog system in the embodiment of the present application. As shown in the figure, the system includes a variety of terminals, including but not limited to the mobile phone, tablet computer, notebook computer, and palmtop computer in fig. 2. In addition, the terminals may include intelligent terminals deployed in service halls, entertainment venues, and the like, as well as smart home devices. An avatar, which may be a virtual character together with its virtual actions, can be built into a terminal of the man-machine dialog system through an application.
The user can input voice or text on the terminal. After the terminal collects the voice or text, the avatar can give corresponding feedback according to its content. To make the feedback fit the context of the man-machine conversation and the state of the user more closely, the embodiment of the application provides a man-machine conversation method, which is described in detail below.
Referring to fig. 3, an embodiment of a method for human-machine interaction according to an embodiment of the present application is illustrated. In this embodiment, the method comprises:
101, acquiring the conversation characteristics of the user in the process of man-machine conversation.
First, it should be noted that the dialog features in the embodiments of the present application are not limited, and may include any features that can represent the context of the man-machine dialog and the user state.
For example, the dialog features may include instructions entered by the user, text content entered by the user, speech content entered by the user, the intonation and mood of the user's voice input, the user's input speed, and the interval time between user responses. The intonation and mood can be represented by the volume of the user's voice; the input speed may be the speed of inputting text content or the speed of inputting voice content, which is not limited in the embodiment of the present application; and the interval time of the user response refers to the time from the moment the avatar in the terminal gives feedback to the moment the terminal receives the next content input by the user.
Since the obtaining method corresponds to the dialog feature, the embodiment of the present application does not specifically limit the obtaining method. Specifically, when the dialog feature is text content, the method for acquiring the dialog feature may be to extract corresponding text content from the input text; when the conversation feature is voice content, the method of acquiring the conversation feature may be to collect voice data of the user and then extract the voice content through a voice recognition technology.
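For illustration only, the following Python sketch shows one way a terminal might assemble these dialog features from simple timestamps; the DialogFeatures fields, the helper name, and the volume proxy are assumptions of this sketch, not part of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class DialogFeatures:
    text: str = ""                  # text content input by the user
    input_speed: float = 0.0        # characters per second while typing or speaking
    response_interval: float = 0.0  # seconds from the avatar's feedback to the user's input
    volume: float = 0.0             # rough proxy for the intonation/mood of a voice input

def collect_features(raw_text: str, input_start: float, input_end: float,
                     last_feedback_time: float, volume: float = 0.0) -> DialogFeatures:
    # Derive the speed and interval features from simple terminal timestamps.
    duration = max(input_end - input_start, 1e-6)  # avoid division by zero
    return DialogFeatures(
        text=raw_text,
        input_speed=len(raw_text) / duration,
        response_interval=input_start - last_feedback_time,
        volume=volume,
    )
```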
102, determining a first label of the user according to the conversation characteristics.
It should be noted that the first label can be of multiple types. It may be a behavior label: for example, when the dialog feature is the user's input speed, the first label may be "slow input". It may also be an emotion label: for example, when the dialog feature is the text content entered by the user, the first label may be "unhappy". Since both the dialog features and the first labels come in many varieties, there are various methods for determining the first label from the dialog features, and the embodiment of the present application does not specifically limit this method.
And 103, determining a second label matched with the first label in the preset label set according to a preset label matching rule.
It should be noted that the second label is used to represent the preset material, while the first label is used to represent the state of the user; the first label and the second label may therefore be the same or different, and matching through the label matching rule is needed when they differ.
Specifically, when the text content input by the user is "I am not happy today", that is, the dialog feature is "I am not happy today", the first label may accordingly be "sad". To improve the user experience, the second label may be "comfort", and the avatar can then present the preset material corresponding to "comfort" to the user. In this scenario, the first label and the second label are related but not identical, so a label matching rule is required to associate them.
And 104, determining the preset material corresponding to the second label according to the corresponding relation between the second label and the preset material.
It should be noted that, to improve the user experience, the preset materials may be as rich as possible, with a corresponding second label set for each preset material. Specifically, second labels corresponding to preset materials of the same type may be identical, i.e., one second label may correspond to multiple preset materials, as in the sketch below. The second label can be a keyword that characterizes the preset material, or it can be another mark such as a number or a letter.
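As a minimal sketch of this one-to-many correspondence, the table below maps each second label to several preset materials and picks one at random; all label and asset names are illustrative, not taken from the embodiment.

```python
import random

# One second label may correspond to several preset materials of the same type;
# the labels and asset names here are illustrative placeholders.
MATERIALS_BY_SECOND_LABEL = {
    "comfort": [
        {"type": "action", "asset": "hug.anim"},
        {"type": "expression", "asset": "gentle_smile.png"},
        {"type": "audio", "asset": "comfort_words.mp3"},
    ],
    "farewell": [
        {"type": "action", "asset": "wave_goodbye.anim"},
    ],
}

def material_for_label(second_label: str) -> dict | None:
    # Look up the preset materials for the matched second label and pick one.
    candidates = MATERIALS_BY_SECOND_LABEL.get(second_label, [])
    return random.choice(candidates) if candidates else None
```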
And 105, displaying the preset material through the preset virtual image.
It should be noted that the avatar may be a virtual character together with its virtual actions, and the display form of the avatar is not specifically limited in the embodiment of the present application. The preset material may include one or more of action material, expression material, and audio material, and different preset materials have different display modes. For example, when the preset material is action material, it can be displayed through specific pictures; when the preset material is audio material, it can be played through an audio output device.
Taking action material as the preset material for example, please refer to fig. 4, a schematic diagram of a first embodiment of displaying preset material in the embodiment of the present application; as shown in fig. 4, the avatar displays the "call calling" action material. Referring to fig. 5, a schematic diagram of a second embodiment of displaying preset material in the embodiment of the present application; as shown in fig. 5, the avatar displays the "order and departure" action material.
For another example, if the second label is "comfort", the avatar may make a hug or play a comfort audio.
In the embodiment of the application, because the conversation features can include many kinds of features, they better reflect the user's state and the context during the man-machine conversation, so the determined preset material fits that state and context more closely and ill-timed preset material is avoided. The preset materials are also diverse in form and can be presented to the user visually as well as audibly, which improves the user experience during the man-machine conversation.
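Putting steps 101 to 105 together, a minimal sketch of one dialog turn might look as follows, reusing the DialogFeatures and material_for_label helpers sketched above; the keyword rule, the lookup tables, and the avatar object are assumptions of this sketch, not the claimed method.

```python
def determine_first_label(features: DialogFeatures) -> str:
    # Step 102: a keyword-only rule, as one possible implementation.
    keyword_to_label = {"not happy": "sad", "bye": "bye"}  # illustrative
    for keyword, label in keyword_to_label.items():
        if keyword in features.text.lower():
            return label
    return "neutral"

def match_second_label(first_label: str) -> str:
    # Step 103: the preset label matching rule, here a fixed lookup table.
    matching_rule = {"sad": "comfort", "bye": "farewell"}
    return matching_rule.get(first_label, "idle")

def dialog_turn(features: DialogFeatures, avatar) -> None:
    # Steps 102-105 for one user turn; step 101 produced `features`.
    second_label = match_second_label(determine_first_label(features))
    material = material_for_label(second_label)   # step 104
    if material is not None:
        avatar.display(material)                  # step 105
```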
As can be seen from the foregoing description, the first tag may be an emotion tag or a behavior tag, and the following description will describe a process of determining the first tag by taking the first tag as an emotion tag as an example.
In another embodiment of the method for human-machine conversation provided by the embodiment of the application, the determining the first tag of the user according to the conversation feature includes:
the emotion of the user is analyzed according to the conversation characteristics.
It can be understood that the user's emotion is analyzed according to the conversation features. For example, when the dialog features include the user's input speed and the intonation of the user's voice input: if the input speed is fast and the intonation is high, the user's emotion may be impatience; if the input speed is slow and the intonation is low, the user's mood may be low.
And determining the first label according to the emotion analysis result.
It can be understood that the emotion analysis result may cover several cases, and the embodiment of the present application can combine all of them to determine the first label. For example, if the user's emotion is determined to be unhappy from the text content in the conversation features and to be low from the intonation and mood, the first label may finally be determined to be unhappy and low; if the emotion is determined to be discontented from the text content and impatient from the intonation, the first label may finally be determined to be angry.
In the embodiment of the application, the emotion of the user is analyzed according to the conversation characteristics, and then the first label is determined according to the emotion analysis result, so that the finally determined preset material meets the emotion requirement of the user, and the user experience is improved.
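A minimal sketch of this emotion analysis, assuming the DialogFeatures fields introduced earlier and purely illustrative thresholds and cue names, might look as follows.

```python
def analyze_emotion(features: DialogFeatures) -> set[str]:
    # Each dialog feature contributes its own emotion cue; thresholds are illustrative.
    cues = set()
    if features.input_speed > 8.0 and features.volume > 0.8:
        cues.add("impatient")
    if features.input_speed < 2.0 and features.volume < 0.2:
        cues.add("low")
    if "not happy" in features.text.lower():
        cues.add("discontented")
    return cues

def first_label_from_emotion(cues: set[str]) -> str:
    # Combine all cues into one first label, e.g. discontented + impatient -> angry.
    if {"discontented", "impatient"} <= cues:
        return "angry"
    return " and ".join(sorted(cues)) if cues else "neutral"
```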
As can be seen from the foregoing description, the dialog features are different, and the manner of determining the first tag is different, and the process of determining the first tag will be described below by taking various dialog features as an example.
In another embodiment of the method for human-machine conversation provided by the embodiment of the application, the conversation feature includes an input speed of a user, and determining the first label of the user according to the conversation feature includes:
if the input speed is lower than the preset first speed, determining that the first label of the user is a preset third label;
and if the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label.
The first speed and the second speed can be adjusted according to actual needs, and the third label and the fourth label can likewise take various forms, which this application does not limit; for example, the third label may indicate low spirits or reticence, and the fourth label may indicate impatience or excitement.
It can be understood that, taking the user's voice input speed as an example, under normal conditions the speed falls within a range whose upper limit is the second speed and whose lower limit is the first speed. When the voice input speed is below the first speed, the user is speaking slowly, and the first label may be determined to be the preset third label; when it is above the second speed, the user is speaking quickly and the emotion is excited, and the first label may be determined to be the preset fourth label.
In another embodiment of the method for human-machine conversation provided by the embodiment of the application, the conversation feature includes an interval time of the user response, and determining the first label of the user according to the conversation feature includes:
if the interval time is less than the preset first time, determining that the first label of the user is a preset fifth label;
and if the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label.
It can be understood that, under normal conditions, the interval time of the user's responses should also fall within a range, whose upper limit may be the second time and whose lower limit may be the first time. When the interval time is less than the first time, the user responds quickly, and the preset fifth label may indicate emotional excitement, quick response, or happiness; when the interval time is greater than the second time, the user responds slowly, and the preset sixth label may indicate low spirits, slow response, or unhappiness. Both threshold rules are sketched together after this paragraph.
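The two threshold rules above (input speed and response interval) can be sketched as follows; all four threshold values and the label names are placeholders, since the embodiment leaves the concrete first/second speeds and times open.

```python
# All four thresholds are adjustable; the values below are placeholders, not from the patent.
FIRST_SPEED, SECOND_SPEED = 2.0, 8.0   # characters per second
FIRST_TIME, SECOND_TIME = 1.0, 10.0    # seconds

def label_from_input_speed(speed: float) -> str | None:
    if speed < FIRST_SPEED:
        return "third_label"    # e.g. low spirits
    if speed > SECOND_SPEED:
        return "fourth_label"   # e.g. impatient or excited
    return None                 # speed within the normal range: no label from this feature

def label_from_interval(interval: float) -> str | None:
    if interval < FIRST_TIME:
        return "fifth_label"    # e.g. quick response, happy
    if interval > SECOND_TIME:
        return "sixth_label"    # e.g. slow response, unhappy
    return None
```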
In another embodiment of the method for human-machine conversation provided by the embodiment of the application, when the conversation feature includes text content input by the user, determining the first tag of the user according to the conversation feature includes:
extracting key words in text content input by a user;
and determining a first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
It can be understood that the text content input by the user may be long, while only part of it expresses the user's state. Taking the text content "I am not happy today" as an example, "today" and "I" do not reflect the user's emotional state, whereas the keyword "not happy" does. Therefore, to improve the efficiency and accuracy of determining the first label, the keywords in the text content can be extracted first, and the first label then determined according to the correspondence between keywords and first labels.
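A minimal sketch of this keyword lookup, with an illustrative correspondence table, might look as follows.

```python
KEYWORD_TO_FIRST_LABEL = {   # illustrative keyword -> first-label correspondence
    "not happy": "sad",
    "great": "happy",
    "bye": "bye",
}

def first_label_from_text(text: str) -> str | None:
    # Extract known keywords and look the first label up; "I am not happy today" -> "sad".
    lowered = text.lower()
    for keyword, label in KEYWORD_TO_FIRST_LABEL.items():
        if keyword in lowered:
            return label
    return None
```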
As can be seen from the foregoing description, the first tag and the second tag may be the same or different, and when the first tag is different from the second tag, matching needs to be performed according to a tag matching rule, and a matching process will be described below.
In another embodiment of the method for man-machine interaction provided in the embodiment of the present application, determining, according to a preset tag matching rule, a second tag matching with a first tag in a preset tag set includes:
and determining a preset seventh label which is a synonym of the first label in the preset label set, and taking the preset seventh label as the second label.
For example, when the first label is "bye", the avatar should correspondingly also make a farewell reaction, that is, show the user the preset farewell material. For that preset material, however, the preset second label may be "bye-bye" rather than "bye". Therefore, the seventh label "bye-bye", which is synonymous with the first label "bye", can first be found in the label set according to the synonym relationship and then used as the second label, so that the preset farewell material corresponding to "bye-bye" is finally determined.
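A minimal sketch of this synonym matching, assuming hand-written synonym groups (a real system might instead use a thesaurus), might look as follows.

```python
# Illustrative synonym groups; these sets are assumptions of this sketch.
SYNONYM_GROUPS = [
    {"bye", "bye-bye", "farewell"},
    {"sad", "unhappy", "down"},
]

def match_by_synonym(first_label: str, preset_labels: set[str]) -> str | None:
    # Find a preset (seventh) label synonymous with the first label; use it as the second label.
    for group in SYNONYM_GROUPS:
        if first_label in group:
            for candidate in group & preset_labels:
                return candidate
    return None
```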
The method provided by the embodiment of the application can be applied to a terminal, and the terminal can be a block node device in a block chain, namely the terminal can be a node in the block chain. The nodes in the blockchain will be described in detail below.
Referring to the data sharing system shown in fig. 6, the data sharing system 100 refers to a system for performing data sharing between nodes, the data sharing system may include a plurality of nodes 101, and the plurality of nodes 101 may refer to respective clients in the data sharing system. Each node 101 may receive input information while operating normally and maintain shared data within the data sharing system based on the received input information. In order to ensure information intercommunication in the data sharing system, information connection can exist between each node in the data sharing system, and information transmission can be carried out between the nodes through the information connection. For example, when an arbitrary node in the data sharing system receives input information, other nodes in the data sharing system acquire the input information according to a consensus algorithm, and store the input information as data in shared data, so that the data stored on all the nodes in the data sharing system are consistent.
Each node in the data sharing system has a corresponding node identifier, and each node may store the node identifiers of the other nodes in the system, so that a generated block can later be broadcast to those nodes according to their node identifiers. Each node may maintain a node identifier list as shown in the following table, storing node names and node identifiers correspondingly. The node identifier may be an IP (Internet Protocol) address or any other information that can identify the node; only the IP address is used in the table below as an example.
Node name    Node identification
Node 1       117.114.151.174
Node 2       117.116.189.145
Node N       119.123.789.258
Each node in the data sharing system stores one identical block chain. As shown in fig. 7, the block chain is composed of a plurality of blocks. The starting block includes a block header and a block body; the block header stores the input-information feature value, a version number, a timestamp, and a difficulty value, and the block body stores the input information. The next block takes the starting block as its parent block and likewise includes a block header and a block body; its block header stores the input-information feature value of the current block, the block-header feature value of the parent block, a version number, a timestamp, and a difficulty value. In this way, the block data stored in each block in the block chain is linked to the block data stored in its parent block, which ensures the security of the input information in the blocks.
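For illustration, a block with the header fields described above can be sketched as follows; the JSON serialization of the header is an assumption of this sketch, not the on-chain encoding.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Block:
    version: int
    prev_hash: str     # block-header feature value of the parent block ("0" for the starting block)
    merkle_root: str   # feature value of the input information
    timestamp: int
    difficulty: int    # nbits
    nonce: int         # the random number x
    body: list         # the input information itself

    def header_hash(self) -> str:
        # Double SHA-256 over the serialized header, as in the formula below.
        header = json.dumps([self.version, self.prev_hash, self.merkle_root,
                             self.timestamp, self.difficulty, self.nonce]).encode()
        return hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()
```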
When a block in the block chain is generated, referring to fig. 8, the node hosting the block chain first verifies the input information it receives; after verification, the input information is stored in the memory pool and the hash tree recording the input information is updated. The node then sets the update timestamp to the time the input information was received, tries different random numbers, and calculates the feature value repeatedly until the calculated feature value satisfies the following formula:
SHA256(SHA256(version+prev_hash+merkle_root+ntime+nbits+x))<TARGET
wherein SHA256 is the feature value algorithm used to calculate the feature value; version is the version information of the relevant block protocol in the block chain; prev_hash is the block-header feature value of the parent block of the current block; merkle_root is the feature value of the input information; ntime is the update time of the update timestamp; nbits is the current difficulty, which stays fixed for a period of time and is redetermined after that period; x is the random number; and TARGET is the feature value threshold, which can be determined from nbits.
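A minimal sketch of this search for a satisfying random number, using Python's hashlib for the double SHA-256 and a plain string concatenation of the header fields (the concatenation format is an assumption of this sketch):

```python
import hashlib

def mine(version: int, prev_hash: str, merkle_root: str,
         ntime: int, nbits: int, target: int) -> int:
    # Try successive values of x until the double SHA-256 of the header is below TARGET.
    x = 0
    while True:
        header = f"{version}{prev_hash}{merkle_root}{ntime}{nbits}{x}".encode()
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return x
        x += 1
```

With a very loose threshold such as target = 1 << 250, the loop returns after only a few iterations, which makes the sketch easy to test.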
Therefore, once a random number satisfying the formula is found, the input information can be stored correspondingly, and the block header and block body are generated to obtain the current block. The node hosting the block chain then sends the newly generated block to the other nodes in its data sharing system according to their node identifiers; the other nodes verify the block and, after verification, add it to the block chains they store.
Referring to fig. 9, a schematic diagram of an embodiment of a human-machine interaction device according to the present application is shown. As shown in fig. 9, an embodiment of the present application provides an embodiment of an apparatus for human-computer conversation, including:
an obtaining unit 301, configured to obtain a dialog feature of a user in a human-computer dialog process;
a first determining unit 302, configured to determine a first tag of a user according to a dialog feature;
a matching unit 303, configured to determine, according to a preset tag matching rule, a second tag that matches the first tag in a preset tag set;
a second determining unit 304, configured to determine a preset material corresponding to the second tag according to a corresponding relationship between the second tag and the preset material;
the display unit 305 is configured to display the preset material through a preset virtual image.
In another embodiment of the apparatus for human-machine conversation provided by the embodiment of the present application, the first determining unit 302 is configured to:
analyzing the emotion of the user according to the conversation characteristics;
and determining the first label according to the emotion analysis result.
In another embodiment of the apparatus for human-computer interaction provided by the embodiment of the present application, the interaction characteristic includes an input speed of a user, and the first determining unit 302 is configured to:
if the input speed is lower than the preset first speed, determining that the first label of the user is a preset third label;
and if the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label.
In another embodiment of the apparatus for human-computer interaction provided by the embodiment of the present application, the interaction feature includes an interval of the user response, and the first determining unit 302 is configured to:
if the interval time is less than the preset first time, determining that the first label of the user is a preset fifth label;
and if the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label.
In another embodiment of the apparatus for human-computer interaction provided by the embodiment of the present application, when the interaction feature includes text content input by a user, the first determining unit 302 is configured to:
extracting key words in text content input by a user;
and determining a first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
In another embodiment of the apparatus for human-computer interaction provided in the embodiment of the present application, the matching unit 303 is configured to:
and determining a preset seventh label which is a synonym of the first label in the preset label set, and taking the preset seventh label as the second label.
Next, an embodiment of the present application further provides a terminal device. As shown in fig. 10, for convenience of description only the portion related to the embodiment of the present application is shown; for undisclosed technical details, please refer to the method portion of the embodiments. The terminal device may be any terminal device including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a Point of Sales (POS) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example:
Fig. 10 is a block diagram showing a partial structure of a mobile phone related to the terminal device provided by the embodiment of the present application. Referring to fig. 10, the mobile phone includes: Radio Frequency (RF) circuit 810, memory 820, input unit 830, display unit 840, sensor 850, audio circuit 860, wireless fidelity (WiFi) module 870, processor 880, and power supply 890. Those skilled in the art will appreciate that the handset configuration shown in fig. 10 is not limiting and may include more or fewer components than those shown, combine some components, or arrange the components differently.
The following specifically describes each component of the mobile phone with reference to fig. 10:
the RF circuit 810 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for processing downlink information of a base station after receiving the downlink information to the processor 880; in addition, the data for designing uplink is transmitted to the base station. In general, RF circuit 810 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 810 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to global system for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA) WCDMA 19P1856le Access, Long Term Evolution (LTE), email, Short Messaging Service (SMS), etc.
The memory 820 may be used to store software programs and modules, and the processor 880 executes various functional applications and data processing of the cellular phone by operating the software programs and modules stored in the memory 820. The memory 820 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 820 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 830 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, also referred to as a touch screen, can collect touch operations performed by the user on or near it (for example, operations performed on or near the touch panel 831 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 831 may include a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends these to the processor 880, and can receive and execute commands sent by the processor 880. The touch panel 831 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 831, the input unit 830 may include other input devices 832, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 840 may be used to display information input by the user or provided to the user, as well as the various menus of the mobile phone. The display unit 840 may include a display panel 841, which may optionally be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch panel 831 can overlay the display panel 841; when the touch panel 831 detects a touch operation on or near it, it transmits the operation to the processor 880 to determine the type of touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 according to that type. Although in fig. 10 the touch panel 831 and the display panel 841 are two separate components implementing the input and output functions of the mobile phone, in some embodiments they may be integrated to implement both functions.
The handset may also include at least one sensor 850, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 841 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 841 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 860, speaker 861, and microphone 862 may provide an audio interface between the user and the mobile phone. The audio circuit 860 can transmit the electrical signal converted from received audio data to the speaker 861, which converts it into a sound signal for output; conversely, the microphone 862 converts collected sound signals into electrical signals, which the audio circuit 860 receives and converts into audio data. The audio data are then processed by the processor 880 and transmitted, for example, to another mobile phone via the RF circuit 810, or output to the memory 820 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to send and receive e-mails, browse webpages, access streaming media and the like through the WiFi module 870, and provides wireless broadband Internet access for the user. Although fig. 10 shows WiFi module 870, it is understood that it does not belong to the essential constitution of the handset, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 880 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 820 and calling data stored in the memory 820, thereby integrally monitoring the mobile phone. Alternatively, processor 880 may include one or more processing units; alternatively, the processor 880 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 880.
The phone also includes a power supply 890 (e.g., a battery) for powering the various components, optionally logically connected to the processor 880 via a power management system, so as to manage charging, discharging, and power consumption via the power management system.
Although not shown, the mobile phone may further include a camera module, a bluetooth module, etc., which will not be described herein.
In the embodiment of the present invention, the processor 880 included in the terminal device further has the function of the apparatus for human-computer conversation in the foregoing embodiment.
The embodiment of the present application further provides a computer-readable storage medium, in which instructions are stored, and when the computer-readable storage medium runs on a computer, the computer is enabled to implement the functions of the device for man-machine interaction in the foregoing embodiments.
Embodiments of the present application further provide a computer program product, where the computer program product includes computer software instructions, and the computer software instructions may implement, through a processor, the functions of the apparatus for human-computer interaction in the foregoing embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method of human-machine interaction, comprising:
acquiring conversation characteristics of a user in a man-machine conversation process;
determining a first label of a user according to the conversation feature;
determining a second label matched with the first label in a preset label set according to a preset label matching rule;
determining a preset material corresponding to a second label according to the corresponding relation between the second label and the preset material;
and displaying the preset material through a preset virtual image.
2. The method of claim 1, wherein determining the first label of the user according to the dialog feature comprises:
analyzing the emotion of the user according to the conversation characteristics;
and determining the first label according to the emotion analysis result.
3. The method of claim 1, wherein the dialog feature comprises an input speed of the user, and wherein determining the first label of the user based on the dialog feature comprises:
if the input speed is lower than a preset first speed, determining that the first label of the user is a preset third label;
and if the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label.
4. The method of claim 1, wherein the dialog feature comprises an interval of user responses, and wherein determining the first label of the user based on the dialog feature comprises:
if the interval time is less than the preset first time, determining that the first label of the user is a preset fifth label;
and if the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label.
5. The method of claim 1, wherein when the dialog feature includes text content input by the user, the determining the first label of the user according to the dialog feature comprises:
extracting key words in the text content input by the user;
and determining a first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
6. The method of claim 1, wherein determining a second label in the preset label set that matches the first label according to a preset label matching rule comprises:
and determining a preset seventh label which is a synonym with the first label in a preset label set, and taking the preset seventh label as a second label.
7. The method of claim 1, wherein the method is applied to a terminal, and the terminal is a block node device in a block chain.
8. A device for human-computer interaction, comprising:
the acquisition unit is used for acquiring conversation characteristics of a user in a man-machine conversation process;
the first determining unit is used for determining a first label of the user according to the conversation characteristics;
the matching unit is used for determining a second label matched with the first label in a preset label set according to a preset label matching rule;
the second determining unit is used for determining the preset material corresponding to the second label according to the corresponding relation between the second label and the preset material;
and the display unit is used for displaying the preset materials through preset virtual images.
9. A terminal device, comprising: a memory, a transceiver, a processor, and a bus system;
wherein the memory is used for storing programs;
the processor is configured to execute a program in the memory to perform the method of any of claims 1 to 7.
10. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1 to 7.
CN201910880191.8A 2019-09-12 Man-machine conversation method, device, terminal equipment and readable storage medium Active CN110597973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910880191.8A CN110597973B (en) 2019-09-12 Man-machine conversation method, device, terminal equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910880191.8A CN110597973B (en) 2019-09-12 Man-machine conversation method, device, terminal equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN110597973A true CN110597973A (en) 2019-12-20
CN110597973B CN110597973B (en) 2024-06-07


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423277A (en) * 2016-02-16 2017-12-01 中兴通讯股份有限公司 A kind of expression input method, device and terminal
CN110023926A (en) * 2016-08-30 2019-07-16 谷歌有限责任公司 The reply content to be presented is generated using text input and user state information to input with response text
CN110209897A (en) * 2018-02-12 2019-09-06 腾讯科技(深圳)有限公司 Intelligent dialogue method, apparatus, storage medium and equipment
CN110085229A (en) * 2019-04-29 2019-08-02 珠海景秀光电科技有限公司 Intelligent virtual foreign teacher information interacting method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857344A (en) * 2020-07-22 2020-10-30 杭州网易云音乐科技有限公司 Information processing method, system, medium, and computing device
CN114721516A (en) * 2022-03-29 2022-07-08 网易有道信息技术(北京)有限公司 Multi-object interaction method based on virtual space and related equipment

Similar Documents

Publication Publication Date Title
CN109379641B (en) Subtitle generating method and device
CN108021572B (en) Reply information recommendation method and device
CN111282268B (en) Plot showing method, plot showing device, plot showing terminal and storage medium in virtual environment
US11274932B2 (en) Navigation method, navigation device, and storage medium
CN108958606B (en) Split screen display method and device, storage medium and electronic equipment
EP3299999A2 (en) Method and device for updating sequence of fingerprint templates for matching
CN105630846B (en) Head portrait updating method and device
CN110673770B (en) Message display method and terminal equipment
CN109993821B (en) Expression playing method and mobile terminal
CN108958629B (en) Split screen quitting method and device, storage medium and electronic equipment
CN109656510B (en) Method and terminal for voice input in webpage
CN108093130B (en) Method for searching contact person and mobile terminal
CN110941750A (en) Data linkage method and related device
CN107103074B (en) Processing method of shared information and mobile terminal
CN111723855A (en) Learning knowledge point display method, terminal equipment and storage medium
CN109634438B (en) Input method control method and terminal equipment
CN108549681B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN110399474B (en) Intelligent dialogue method, device, equipment and storage medium
CN110780751B (en) Information processing method and electronic equipment
CN110750198A (en) Expression sending method and mobile terminal
CN108959585B (en) Expression picture obtaining method and terminal equipment
CN112764543A (en) Information output method, terminal equipment and computer readable storage medium
CN110277097B (en) Data processing method and related equipment
CN111666498A (en) Friend recommendation method based on interactive information, related device and storage medium
CN111611369A (en) Interactive method based on artificial intelligence and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant