WO2019153860A1 - Information interaction method and apparatus, storage medium and electronic apparatus - Google Patents

Information interaction method and apparatus, storage medium and electronic apparatus

Info

Publication number
WO2019153860A1
WO2019153860A1 · PCT/CN2018/119356 · CN2018119356W
Authority
WO
WIPO (PCT)
Prior art keywords
emotion
virtual object
terminal
virtual
information
Prior art date
Application number
PCT/CN2018/119356
Other languages
English (en)
French (fr)
Inventor
仇蒙
潘佳绮
张雅
张书婷
肖庆华
汪俊明
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to EP18905825.8A priority Critical patent/EP3751395A4/en
Publication of WO2019153860A1 publication Critical patent/WO2019153860A1/zh
Priority to US16/884,877 priority patent/US11353950B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/424 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/847 Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85 Providing additional services to players
    • A63F13/87 Communicating with other players during game play, e.g. by e-mail or chat
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/66 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/28 Constructional details of speech recognition systems
    • G10L15/32 Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems

Definitions

  • the present application relates to the field of computers, and in particular, to an information interaction method and apparatus, a storage medium, and an electronic device.
  • an input plug-in is usually set in an operation interface displayed by the application client, the information input by the user is obtained through the input plug-in, and the information is then sent to the target object with which interaction is desired, to complete the information interaction.
  • the embodiments of the present application provide an information interaction method and device, a storage medium, and an electronic device, so as to at least solve the technical problem that the interaction operation of related information interaction methods has high complexity.
  • an information interaction method is provided, including: a terminal extracting a biometric feature of a target object, wherein the target object controls a first virtual object through a first client to perform a virtual task; the terminal identifying, according to the extracted biometric feature, the current first emotion of the target object; the terminal determining first interaction information to be exchanged that matches the first emotion; and the terminal sending the first interaction information to the second client where a second virtual object is located, wherein the second virtual object performs the virtual task together with the first virtual object.
  • an information interaction apparatus is further provided, applied to a terminal, and including: an extracting unit configured to extract a biometric feature of a target object, wherein the target object controls a first virtual object through a first client to perform a virtual task; an identifying unit configured to identify the current first emotion of the target object according to the extracted biometric feature; a determining unit configured to determine first interaction information to be exchanged that matches the first emotion; and a sending unit configured to send the first interaction information to the second client where the second virtual object is located, wherein the second virtual object and the first virtual object jointly perform the virtual task.
  • a storage medium is further provided, in which a computer program is stored, wherein the computer program is configured to execute the above information interaction method when run.
  • in the embodiments of the present application, the terminal extracts the biometric feature of the target object, identifies the current first emotion of the target object according to the extracted biometric feature, determines the first interaction information to be exchanged that matches the first emotion, and sends the first interaction information to the second client where the second virtual object is located. The first interaction message to be exchanged is thus obtained from the biometric feature of the target object and sent to the second client, which avoids having to interrupt the application task executed by the virtual object controlled through the application client in order to complete information interaction with the target object. The information interaction can be completed while the controlled object executes the application task, achieving the technical effect of reducing the complexity of the interaction operation and solving the technical problem that the interaction of related information interaction methods has high complexity.
  • FIG. 1 is a schematic diagram of an application environment of an optional information interaction method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of an optional information interaction method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an optional information interaction method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another optional information interaction method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of still another optional information interaction method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of still another optional information interaction method according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of still another optional information interaction method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of still another optional information interaction method according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of still another optional information interaction method according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of still another optional information interaction method according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of still another optional information interaction method according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of still another optional information interaction method according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of still another optional information interaction method according to an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an optional information interaction apparatus according to an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of an optional electronic device according to an embodiment of the present application.
  • an information interaction method is provided.
  • the information interaction method may be applied to an environment as shown in FIG. 1 .
  • the terminal 102 recognizes a facial feature of a person through an identification device carried on the terminal for identifying a biometric feature of the user, or collects a sound feature of the user through a sound collecting device. The terminal identifies, according to the collected biometric feature, the first emotion of the target object, determines the first interaction information to be exchanged that matches the first emotion, and transmits the first interaction information over the network 104 to the second terminal 106 where the second virtual object is located. After the second terminal 106 receives the first interaction information, it displays the first interaction information on the second client.
  • the first client is located at the first terminal 102 and the second client is located at the second terminal 106.
  • the terminals running the foregoing first client and second client may include, but are not limited to, at least one of the following: a mobile phone, a tablet computer, a notebook computer, and other mobile hardware devices that can extract the biometric features of the target object.
  • the above network may include, but is not limited to, a wireless network, wherein the wireless network includes: Bluetooth, Wi-Fi, and other networks that implement wireless communication. The above is only an example, and this embodiment does not limit this.
  • the foregoing information interaction method may include:
  • the terminal extracts a biometric feature of the target object, where the target object controls the first virtual object to perform a virtual task by using the first client.
  • the terminal identifies, according to the extracted biometrics, a current first emotion of the target object.
  • the terminal determines first interaction information to be exchanged that matches the first emotion.
  • the terminal sends the first interaction information to the second client where the second virtual object is located, where the second virtual object performs the virtual task together with the first virtual object.
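  • The four steps above may be sketched as follows. This is a minimal illustrative Python sketch, not the patent's implementation; every function, variable, and message name is hypothetical.

```python
# Minimal sketch of the four-step flow; all names are hypothetical.
EMOTION_MESSAGES = {            # preset emotion identifier -> interaction info
    "tense": "Save me!",
    "excited": "Come on, we can do it!",
    "doubtful": "Are you sure?",
}

def interact(terminal, target_object, second_clients):
    feature = terminal.extract_biometric(target_object)  # step 1: face or voice
    emotion = terminal.identify_emotion(feature)         # step 2: first emotion
    message = EMOTION_MESSAGES.get(emotion)              # step 3: match info
    if message is not None:
        for client in second_clients:                    # step 4: send to the
            terminal.send(client, message)               # second client(s)
```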
  • the above information interaction method may be, but is not limited to, applied to the game field or the simulation training field.
  • the first client may be a terminal used by one user
  • the second client may be a terminal used by another user
  • the first virtual object may be a virtual object controlled by the first client
  • the second virtual object may be a virtual object controlled by the second client.
  • the terminal used by the user extracts the biometric feature of the user and identifies the user's current first emotion, such as anger, nervousness, or excitement.
  • the terminal determines first interaction information that matches the current first emotion, and transmits the first interaction information to the second client used by another user.
  • in this embodiment, the biometric feature of the target object is extracted by the terminal; the terminal identifies the current first emotion of the target object according to the extracted biometric feature; the terminal determines the first interaction information to be exchanged that matches the first emotion; and the terminal sends the first interaction information to the second client. This avoids having to interrupt the application task executed by the virtual object controlled through the application client in order to complete information interaction with the target object, so that the information interaction can be completed while the controlled object executes the application task, achieving the technical effect of reducing the complexity of the interaction operation and solving the problem of high interaction complexity in the related art.
  • the facial image of the target object is collected by the image acquiring device of the terminal where the first client is located, and the facial feature of the target object is extracted from the facial image; the terminal then searches for the emotion identifier corresponding to the extracted facial feature, and the emotion represented by that emotion identifier is taken as the first emotion.
  • the above biometrics may be facial expressions of the user or sound information.
  • the terminal collects a facial image of the user through the collecting device, analyzes the collected facial image, and extracts the user's facial features, such as the eyebrows, eyes, and mouth; according to the characteristics of each facial feature, the user's first emotion is obtained correspondingly.
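  • A minimal sketch of how extracted facial features could map to an emotion identifier, assuming a simple rule-based classifier; the feature names and thresholds below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical rule-based mapping from facial measurements to an emotion
# identifier; feature names and thresholds are illustrative only.
def emotion_from_face(features):
    """features: dict of normalized facial measurements in [0, 1]."""
    if features.get("brow_furrow", 0.0) > 0.7:
        return "tense"
    if features.get("mouth_open", 0.0) > 0.6 and features.get("brow_raise", 0.0) > 0.5:
        return "excited"
    if features.get("head_tilt", 0.0) > 0.5:
        return "doubtful"
    return "neutral"
```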
  • the sound signal of the target object may be collected by the sound collection device of the terminal where the first client is located, and the sound feature of the target object is extracted from the sound signal.
  • the terminal compares the extracted sound feature with the pre-configured target audio feature; if the similarity between the sound feature and the target audio feature is higher than a predetermined threshold, the terminal obtains the emotion identifier corresponding to the target audio feature and takes the emotion represented by the emotion identifier as the first emotion.
  • the above sound signal can be a sound emitted by the user.
  • the sound collecting device collects the user's voice, compares the collected sound with the target audio feature, obtains an emotional identifier corresponding to the target audio feature, and obtains the first emotion.
  • as shown in Table 1, after the sound signal of the target object collected by the sound collection device, such as "Brothers, charge!", is received, the sound feature "charge" in the received sound signal is extracted and compared with the target audio features. If the similarity between the obtained sound feature and the target audio feature is, for example, 80%, which exceeds the predetermined threshold of 60%, the corresponding emotion identifier is obtained according to the target audio feature; here the emotion identifier is "excited", indicating that the user is currently very excited.
  • the above-mentioned target audio feature may be extracted from any sound signal collected by the sound collecting device, and the sound feature may be acquired by any algorithm. The target audio feature may also be configured in advance, and the above-mentioned emotion identifier may be other words.
  • the target audio feature may also be a feature such as a timbre, a pitch, and a sound intensity of the sound. After the voice information of the user is obtained, the obtained sound information is compared with the timbre, pitch, and sound intensity of the target audio feature, thereby obtaining a corresponding emotion identifier.
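  • The similarity comparison described above might look like the following sketch, assuming the sound feature is represented as a numeric vector (e.g. timbre, pitch, sound intensity) and cosine similarity is used; the vectors, the feature representation, and the reuse of the 60% threshold are illustrative assumptions.

```python
import math

# Compare an extracted sound feature vector against a pre-configured target
# audio feature; return the emotion identifier when similarity passes the
# threshold. Vector contents (timbre, pitch, intensity) are illustrative.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_emotion(sound_feature, target_feature, emotion_id, threshold=0.6):
    similarity = cosine_similarity(sound_feature, target_feature)
    return emotion_id if similarity > threshold else None

# A feature that is, say, 80% similar to the "charge" target exceeds the
# 60% threshold, so the "excited" identifier is returned.
print(match_emotion([0.9, 0.8, 0.7], [1.0, 0.9, 0.6], "excited"))
```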
  • the terminal where the first client is located collects the facial image and sound information of the user through the collection device carried on the terminal.
  • the terminal analyzes the collected facial images to obtain facial features, and analyzes the sound information to obtain sound features.
  • the corresponding emotion identifier is obtained according to the facial feature and the sound feature, thereby obtaining the first emotion of the user.
  • the first interaction information is correspondingly obtained, and the first interaction information is displayed on the second client.
  • the results are shown in Figure 5.
  • when the first interaction information is displayed on the second client, the first interaction information may or may not also be displayed on the first client.
  • FIG. 6 is an example of the first client displaying the first interaction information.
  • the terminal may, but is not limited to, determine a virtual object in the same camp as the first virtual object as the second virtual object, and determine a virtual object in a different camp from the first virtual object as the third virtual object.
  • the second virtual object may be one or more virtual objects belonging to the same camp as the first virtual object
  • the third virtual object may be one or more virtual objects belonging to a different camp from the first virtual object.
  • the second virtual object and the first virtual object may be a teammate relationship, and the third virtual object and the first virtual object may be different teams or the like.
  • the second virtual object or the third virtual object may be determined using the following method:
  • the terminal divides the virtual object into a second virtual object or a third virtual object according to the identity information of the virtual object;
  • the terminal divides the virtual object into a second virtual object or a third virtual object according to the task target of the virtual object;
  • the terminal divides the virtual object into a second virtual object or a third virtual object according to the location of the virtual object.
  • continuing with the game field as an example, the identity information may be the gender, nationality, and the like of the virtual object.
  • the terminal sets a virtual object having the same nationality as the first virtual object as the second virtual object, and sets a virtual object having a different nationality from the first virtual object as the third virtual object, and so on.
  • the above location may be the birth location of the virtual object. For example, taking the birth position as an example, birth areas for different virtual objects are set in advance; a virtual object with the same birth area as the first virtual object is set as a second virtual object, and a virtual object with a different birth area from the first virtual object is set as a third virtual object.
  • the task target of the above virtual object may be a winning condition of the virtual object.
  • a virtual object with the same winning condition as the first virtual object is classified as a second virtual object, and a virtual object with a different winning condition from the first virtual object is classified as a third virtual object.
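  • The classification just described reduces to a single attribute comparison, sketched below; the attribute names and example values are assumptions for illustration only.

```python
# Classify other virtual objects into second (same camp) or third (different
# camp) objects by a chosen attribute; attribute names are illustrative.
def classify(first, others, key="camp"):
    """key may be 'camp', 'nationality', 'birth_area', or 'win_condition'."""
    second, third = [], []
    for obj in others:
        (second if obj.get(key) == first.get(key) else third).append(obj)
    return second, third

first = {"camp": "red", "birth_area": "north"}
others = [{"camp": "red"}, {"camp": "blue"}]
second_objects, third_objects = classify(first, others)
```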
  • the terminal may use as the second virtual object all virtual objects that belong to the same camp as the first virtual object and send the first interaction information to the second client where each second virtual object is located, or may use only part of the virtual objects in the same camp as the second virtual object and send the first interaction information to the second client where the second virtual object is located; and the terminal sends second interaction information to the third client where a third virtual object belonging to a different camp from the first virtual object is located.
  • the first interaction information is matched with the first emotion
  • the second interaction information is matched with the second emotion
  • the first emotion is different from the second emotion.
  • the sending range of the first interaction message may be configured on the first client, where the first interaction message may be a full-person message or a friend message.
  • the first client can send a full-person message, or can send a friend message to a configured fixed friend.
  • Sending a full-person message sends the message to all other users. Sending a friend message requires forming a group of multiple friends; a friend message can be sent to all friends in a group at once, or to a fixed friend.
  • the second client displays a full-person message sent by the first client, and the full-person message is visible to all users.
  • FIG. 9 shows that when the user sends a friend message, the second client can see the friend message sent by the user, but the friend message is not visible to all users; only the friends configured by the first client can see it.
  • the full-person message and the friend message can be distinguished by setting the full-person message and the friend message to different colors or with different flags.
  • the buddy message in FIG. 10 is underlined, and is thus distinguished from the full-person message.
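  • A sketch of the two sending ranges and the color/flag distinction just described; the message fields, color values, and client interface are illustrative assumptions.

```python
# Route a message to all users or to a configured friend group, marking the
# two kinds with different colors so recipients can tell them apart.
def send_message(sender, text, scope, all_clients, friend_group):
    if scope == "all":                      # full-person message: every user
        recipients, color = all_clients, "white"
    else:                                   # friend message: configured group
        recipients, color = friend_group, "yellow"  # distinct color/flag
    for client in recipients:
        client.display({"from": sender, "text": text,
                        "color": color, "scope": scope})
```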
  • the third client where the third virtual object is located receives a message different from the second client.
  • FIG. 5 shows the first interaction information received by the second client, and FIG. 11 shows the interaction information received by the third client. Because the third virtual object of the third client is in a different camp from the first virtual object of the first client, the message displayed by the third client is different from the message displayed by the second client.
  • the searching, by the terminal, for the first interaction information that matches the emotion identifier of the first emotion includes: when the emotion identifier indicates a first emotion type, the terminal acquires first interaction information that matches the first emotion type, wherein the first interaction information that matches the first emotion type is used to request assistance for the first virtual object; when the emotion identifier indicates a second emotion type, the terminal acquires first interaction information that matches the second emotion type, wherein the first interaction information that matches the second emotion type is used to encourage the second virtual object; and when the emotion identifier indicates a third emotion type, the terminal acquires first interaction information that matches the third emotion type, wherein the first interaction information that matches the third emotion type is used to issue an inquiry request to the second virtual object.
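  • The three emotion-type branches might be expressed as the following sketch; the type identifiers and message texts follow the examples given later in this description and are otherwise assumptions.

```python
# Map an emotion type to the purpose and text of the first interaction
# information; identifiers and texts are illustrative.
def interaction_for(emotion_type):
    if emotion_type == "tense":      # first type: request assistance
        return {"purpose": "request_assistance", "text": "Save me!"}
    if emotion_type == "excited":    # second type: encourage teammates
        return {"purpose": "encourage", "text": "Come on, we can do it!"}
    if emotion_type == "doubtful":   # third type: inquiry request
        return {"purpose": "inquiry", "text": "Are you sure?"}
    return None
```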
  • in this embodiment, the biometric feature of the target object is extracted by the terminal; the terminal identifies the current first emotion of the target object according to the extracted biometric feature; the terminal determines the first interaction information to be exchanged that matches the first emotion; and the terminal sends the first interaction information to the second client. The first interaction message to be exchanged is thus obtained according to the biometric feature of the target object and sent to the second client, which avoids having to interrupt the application task executed by the virtual object controlled through the application client in order to complete information interaction with the target object, thereby achieving the technical effect of reducing the complexity of the interaction operation and solving the high interaction complexity in the related art.
  • the sending, by the terminal, the first interaction information to the second client where the second virtual object is located includes:
  • the terminal determines a second virtual object from the virtual task, where the second virtual object and the first virtual object are virtual objects of the same camp;
  • the terminal sends the first interaction information to the second client where the second virtual object is located.
  • the first interaction information may be text information, image information, or audio information.
  • taking the first interaction information as text information, the following is described in conjunction with FIG. 5.
  • the client shown in FIG. 5 is a second client, and the virtual object in the second client is a second virtual object.
  • the first virtual object on the first client has a teammate relationship with the second virtual object, and the message sent by the first client is displayed in the upper left corner of the second client. Through this message, the second client can learn the status of the first virtual object of the first client.
  • in this embodiment, the virtual object in the same camp as the first virtual object is determined by the terminal as the second virtual object, and the first interaction message is sent to the second client where the second virtual object is located, so that the first interaction message is sent only to the second virtual object in the same camp, thereby improving the flexibility of sending the first interaction message.
  • the determining, by the terminal, the second virtual object from the virtual task includes:
  • the terminal takes all virtual objects in the same camp as the first virtual object as second virtual objects; or
  • the terminal takes part of the virtual objects in the same camp as second virtual objects, wherein the partial virtual objects have an association relationship with the first virtual object.
  • the first client may send a full-person message, or may send a friend message to a configured fixed friend.
  • Sending a full-person message sends the message to all other users. Sending a friend message requires forming a group of multiple friends; a friend message can be sent to all friends in a group at once, or to a fixed friend.
  • in this embodiment, all the virtual characters belonging to the same camp as the first virtual character are used as second virtual characters, or part of the virtual characters belonging to the same camp as the first virtual character are used as second virtual characters, so that the second virtual role can be flexibly determined, making information interaction more flexible.
  • when the terminal sends the first interaction information to the second client where the second virtual object is located, the method further includes:
  • the terminal determines a third virtual object from the virtual task, where the third virtual object and the first virtual object are virtual objects of different camps;
  • the terminal sends the second interaction information to the third client where the third virtual object is located, where the second interaction information matches the second emotion, and the second emotion and the first emotion are different emotions.
  • FIG. 5 shows the first interaction information received by the second client, and FIG. 11 shows the interaction information received by the third client. Because the third virtual object of the third client is in a different camp from the first virtual object of the first client, the message displayed by the third client is different from the message displayed by the second client.
  • the third virtual object is determined by the terminal, and the second interaction message is sent to the third virtual object, thereby improving the flexibility of information interaction and further reducing the complexity of information interaction.
  • the extracting, by the terminal, of the biometric feature of the target object includes: the terminal collects a facial image of the target object by using an image acquiring device in the terminal where the first client is located, and the terminal extracts the facial feature of the target object from the facial image. The identifying, by the terminal according to the extracted biometric feature, of the current first emotion of the target object includes: the terminal identifies the first emotion of the target object according to the extracted facial features.
  • the identifying, by the terminal, of the first emotion of the target object according to the extracted facial features includes:
  • the terminal searches for an emotion identifier that matches the extracted facial features.
  • the terminal uses the emotion represented by the found emotion identifier as the first emotion.
  • the image capturing device may be a camera on the mobile terminal.
  • the facial features described above may be features of facial organs such as the eyebrows, forehead, eyes, and face shape.
  • the above biometrics may be facial expressions of the user or sound information.
  • the terminal collects a facial image of the user through the collecting device, analyzes the collected facial image, and extracts the user's facial features, such as the eyebrows, eyes, and mouth; according to the characteristics of each facial feature, the user's first emotion is obtained correspondingly.
  • the face region is cropped from the facial image according to a face detection algorithm; depending on the facial feature extraction and expression classification method used, the proportion of the cropped face region may differ. If the face picture is a dynamic picture, the facial features need to be tracked. The cropped face image is subjected to geometric or grayscale processing, and then the facial features are extracted to recognize the expression.
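  • A minimal sketch of this preprocessing pipeline, assuming OpenCV (the patent names no specific library): detect the face, crop it, and convert it to grayscale before handing the region to a feature extractor.

```python
import cv2  # assumes OpenCV; the patent does not name a specific library

# Detect the face, crop the region, and return a grayscale patch ready for
# feature extraction; detector parameters are illustrative defaults.
def preprocess(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                 # no face found in this frame
    x, y, w, h = faces[0]           # crop the first detected face region
    return gray[y:y + h, x:x + w]   # grayscale crop for feature extraction
```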
  • in this embodiment, the terminal extracts the facial feature from the facial image of the target object and acquires the first emotion according to the facial feature, so that the first emotion of the target object can be obtained directly from the facial feature, thereby reducing the complexity of information interaction.
  • the extracting, by the terminal, of the biometric feature of the target object includes: the terminal collects the sound signal of the target object by using the sound collection device in the terminal where the first client is located, and the terminal extracts the sound feature of the target object from the sound signal. The identifying, by the terminal according to the extracted biometric feature, of the current first emotion of the target object includes: the terminal identifies the first emotion of the target object according to the extracted sound feature.
  • the identifying, by the terminal, of the first emotion of the target object according to the extracted sound feature includes:
  • the terminal acquires a pre-configured target audio feature, where the target audio feature is used to trigger the first interaction information.
  • when the similarity between the extracted sound feature and the target audio feature is higher than a predetermined threshold, the terminal acquires the emotion identifier corresponding to the target audio feature and uses the emotion represented by the emotion identifier as the first emotion.
  • the above sound signal can be a sound emitted by the user.
  • the sound collecting device collects the user's voice, compares the collected sound with the target audio feature, obtains an emotional identifier corresponding to the target audio feature, and obtains the first emotion.
  • after a sound signal of the target object such as "Brothers, charge!" is received, the sound feature "charge" in the received sound signal is extracted and compared with the target audio features. If the similarity between the obtained sound feature and the target audio feature is, for example, 80%, which exceeds the predetermined threshold of 60%, the corresponding emotion identifier is obtained according to the target audio feature; here the emotion identifier is "excited", indicating that the user is currently very excited.
  • the above-mentioned target audio feature may be extracted from any sound signal collected by the sound collecting device, and the sound feature may be acquired by any algorithm. The target audio feature may also be configured in advance, and the above-mentioned emotion identifier may be other words.
  • the target audio feature may also be a feature such as a timbre, a pitch, and a sound intensity of the sound.
  • after the voice information of the user is obtained, the obtained sound information is compared with the timbre, pitch, and sound intensity of the target audio feature, thereby obtaining the corresponding emotion identifier.
  • the input voice may be recognized by at least two voice recognition branches. When the speech recognition results produced by the two speech recognition branches are identical, the recognized result can be output. When the speech recognition results produced by the two branches are inconsistent, the user is prompted to re-enter the speech signal.
  • the terminal may further process the at least two speech recognition results according to a minority-obeys-majority principle, a weighting algorithm, or a combination of the two to obtain a speech recognition result, and output the speech recognition result.
  • the above speech recognition branch may be implemented using statistically based hidden Markov model recognition or a training algorithm, or a combination of both.
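  • A sketch of combining recognition branches as just described: with exactly two branches, identical results are output and inconsistent results trigger a re-prompt; with more branches, a majority vote or weighting decides. The combination logic below is illustrative, not the patent's algorithm.

```python
from collections import Counter

# Combine results from several recognition branches. `results` is a list of
# transcriptions; `weights` optionally gives each branch a reliability score.
def combine(results, weights=None):
    if len(results) == 2:
        # two-branch case: identical -> output, inconsistent -> None
        # (caller re-prompts the user for new speech input)
        return results[0] if results[0] == results[1] else None
    if weights is None:                      # minority obeys majority
        return Counter(results).most_common(1)[0][0]
    scores = {}                              # weighted variant
    for result, weight in zip(results, weights):
        scores[result] = scores.get(result, 0.0) + weight
    return max(scores, key=scores.get)
```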
  • in this embodiment, by configuring the target audio feature in advance, when the similarity between the target audio feature and the sound feature is higher than a predetermined threshold, the terminal acquires the emotion identifier corresponding to the target audio feature and uses the emotion represented by the emotion identifier as the first emotion, so that the corresponding first emotion can be obtained from the voice information, thereby reducing the complexity of information interaction.
  • the determining, by the terminal, of the first interaction information to be exchanged that matches the current first emotion of the target object includes:
  • the terminal acquires an emotion identifier of the first emotion.
  • the terminal searches for first interaction information that matches the emotion identifier of the first emotion.
  • the correspondence between emotion identifiers of first emotions and first interaction information may be preset; according to the acquired emotion identifier, the corresponding first interaction information is looked up from the preset correspondence between emotion identifiers and first interaction information, thereby obtaining the first interaction information and transmitting it.
  • in this embodiment, the first interaction information is looked up according to the correspondence between the emotion identifier and the first interaction information, so that the first interaction information can be sent, improving the efficiency of information interaction.
  • the searching, by the terminal, for the first interaction information that matches the emotion identifier of the first emotion includes: when the emotion identifier indicates a first emotion type, the terminal acquires first interaction information that matches the first emotion type, wherein the first interaction information that matches the first emotion type is used to request assistance for the first virtual object; when the emotion identifier indicates a second emotion type, the terminal acquires first interaction information that matches the second emotion type, wherein the first interaction information that matches the second emotion type is used to send an encouragement prompt to the second virtual object; and when the emotion identifier indicates a third emotion type, the terminal acquires first interaction information that matches the third emotion type, wherein the first interaction information that matches the third emotion type is used to issue an inquiry request to the second virtual object.
  • the emotion types described above may be tension, excitement, doubt, and the like.
  • the first interaction information may be text information, such as "Save me", "Come on, we can do it!", or "Are you sure?". When the emotion identifier indicates a first emotion type such as tension, the matching first interaction information may be "Save me"; when the emotion identifier indicates an emotion type such as excitement, the matching first interaction information may be "Come on, we can do it!"; and when the emotion identifier indicates an emotion type such as doubt, the matching first interaction information may be "Are you sure?" to express the question.
  • the content of the first interaction information is determined by the terminal according to the type of the emotion identifier, thereby further reducing the complexity of information interaction and improving the flexibility of information interaction.
  • the determining, by the terminal, of the first interaction information to be exchanged that matches the current first emotion of the target object includes at least one of the following:
  • the terminal determines text information that matches the first emotion
  • the terminal determines audio information that matches the first emotion.
  • FIG. 12 and FIG. 13 both show second clients.
  • the message sent by the first client to the second client may be a voice message or an image message.
  • a voice message is shown in FIG. 12, and an image message is shown in FIG. 13.
  • FIG. 12 and FIG. 13 are only examples, and do not constitute a limitation on the present application.
  • the terminal sets different types for the first interaction information, thereby improving the flexibility of information interaction and further reducing the complexity of information interaction.
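  • Since the first interaction information may be text, audio, or image information, the payload carrying it might be sketched as follows; the field names and structure are assumptions for illustration.

```python
# Illustrative first-interaction-information payload covering the three
# content types mentioned (text, audio/voice, image); field names assumed.
def make_interaction_info(emotion_id, kind, payload):
    assert kind in ("text", "audio", "image")
    return {"emotion": emotion_id, "type": kind, "payload": payload}

text_msg = make_interaction_info("excited", "text", "Come on, we can do it!")
voice_msg = make_interaction_info("excited", "audio", b"<encoded voice bytes>")
```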
  • the method according to the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course by hardware, but in many cases the former is the better implementation.
  • the part of the technical solution of the present application that is essential or that contributes to the related art may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or a CD-ROM), which includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the various embodiments of the present application.
  • an information interaction apparatus for implementing the above information interaction method.
  • the information interaction apparatus may include:
  • an extracting unit 1402 configured to extract a biometric of the target object, wherein the target object controls the first virtual object to execute the virtual task by using the first client;
  • an identifying unit 1404 configured to identify a current first emotion of the target object according to the extracted biometrics
  • determining unit 1406, configured to determine first interaction information to be interacted with the first emotion
  • the sending unit 1408 is configured to send the first interaction information to the second client where the second virtual object is located, where the second virtual object performs the virtual task together with the first virtual object.
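  • The four units listed above might be wired together as in the following structural sketch; the reference numerals follow FIG. 14, while the Python class and parameter names are hypothetical.

```python
# Structural sketch of the apparatus of FIG. 14; implementations are
# placeholders passed in as callables.
class InformationInteractionApparatus:
    def __init__(self, extractor, identifier, matcher, sender):
        self.extracting_unit = extractor    # 1402: extract biometric feature
        self.identifying_unit = identifier  # 1404: identify first emotion
        self.determining_unit = matcher     # 1406: match interaction info
        self.sending_unit = sender          # 1408: send to second client

    def run(self, target_object, second_client):
        feature = self.extracting_unit(target_object)
        emotion = self.identifying_unit(feature)
        info = self.determining_unit(emotion)
        self.sending_unit(second_client, info)
```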
  • the above information interaction device may be, but is not limited to, applied to the field of games or the field of simulation training.
  • the first client may be a game device used by one user
  • the second client may be a game device used by another user
  • the first virtual object may be a virtual object controlled by the first client
  • the second virtual object may be a virtual object controlled by the second client.
  • in this embodiment, the biometric feature of the target object is extracted; the current first emotion of the target object is identified according to the extracted biometric feature; the first interaction information to be exchanged that matches the first emotion is determined; and the first interaction information is sent to the second client where the second virtual object is located. The first interaction message to be exchanged is thus obtained according to the biometric feature of the target object and sent to the second client, which avoids having to interrupt the application task executed by the virtual object controlled through the application client in order to complete information interaction with the target object, so that the information interaction can be completed while the controlled object executes the application task, thereby achieving the technical effect of reducing the complexity of the interaction operation and solving the problem of high interaction complexity in the related art.
  • the foregoing first interaction information may be, but is not limited to, one or more of text information, image information, and audio information.
  • the facial image of the target object is collected by the image collecting device of the terminal where the first client is located, the facial feature of the target object is extracted from the facial image, the emotion identifier corresponding to the extracted facial feature is searched for, and the emotion represented by that emotion identifier is taken as the first emotion.
  • the above biometrics may be facial expressions of the user or sound information.
  • the terminal collects a facial image of the user through the collecting device, analyzes the collected facial image, and extracts the user's facial features, such as the eyebrows, eyes, and mouth; according to the characteristics of each facial feature, the user's first emotion is obtained correspondingly.
  • the sound signal of the target object may be collected by the sound collection device of the terminal where the first client is located, and the sound feature of the target object is extracted from the sound signal. The extracted sound feature is compared with the pre-configured target audio feature; if the similarity between the sound feature and the target audio feature is higher than a predetermined threshold, the emotion identifier corresponding to the target audio feature is acquired, and the emotion represented by the emotion identifier is taken as the first emotion.
  • the above sound signal can be a sound emitted by the user.
  • the sound collecting device collects the user's voice, compares the collected sound with the target audio feature, obtains an emotional identifier corresponding to the target audio feature, and obtains the first emotion.
  • after a sound signal of the target object such as "Brothers, charge!" is received, the sound feature "charge" in the received sound signal is extracted and compared with the target audio features. If the similarity between the obtained sound feature and the target audio feature is, for example, 80%, which exceeds the predetermined threshold of 60%, the corresponding emotion identifier is obtained according to the target audio feature; here the emotion identifier is "excited", indicating that the user is currently very excited.
  • the above-mentioned target audio feature may be extracted from any sound signal collected by the sound collecting device, and the sound feature may be acquired by any algorithm. The target audio feature may also be configured in advance, and the above-mentioned emotion identifier may be other words.
  • the target audio feature may also be a feature such as a timbre, a pitch, and a sound intensity of the sound. After the voice information of the user is obtained, the obtained sound information is compared with the timbre, pitch, and sound intensity of the target audio feature, thereby obtaining a corresponding emotion identifier.
  • the terminal where the first client is located collects the facial image and sound information of the user through the collection device carried on the terminal.
  • the collected facial images are analyzed to obtain facial features, and the sound information is analyzed to obtain sound features.
  • the corresponding emotion identifier is obtained according to the facial feature and the sound feature, thereby obtaining the first emotion of the user.
  • the first interaction information is correspondingly obtained, and the first interaction information is displayed on the second client.
  • the results are shown in Figure 5.
  • when the first interaction information is displayed on the second client, the first interaction information may or may not also be displayed on the first client.
  • FIG. 6 is an example of the first client displaying the first interaction information.
  • the virtual object in the same camp as the first virtual object may be determined as the second virtual object, and a virtual object in a different camp from the first virtual object is determined as the third virtual object.
  • the second virtual object may be one or more virtual objects belonging to the same camp as the first virtual object
  • the third virtual object may be one or more virtual objects belonging to a different camp from the first virtual object.
  • the second virtual object and the first virtual object may be a teammate relationship, and the third virtual object and the first virtual object may be different teams or the like.
  • the second virtual object or the third virtual object may be determined using the following method:
  • the virtual object is divided into a second virtual object or a third virtual object according to the location of the virtual object.
  • the game field is continued as an example, and the identity information may be the gender, nationality, and the like of the virtual object.
  • a virtual object having the same nationality as the first virtual object is set as the second virtual object, and a virtual object having a different nationality from the first virtual object is set as the third virtual object, and so on.
  • the above location may be the birth location of the virtual object. For example, taking the birth position as an example, birth areas for different virtual objects are set in advance; a virtual object with the same birth area as the first virtual object is set as a second virtual object, and a virtual object with a different birth area from the first virtual object is set as a third virtual object.
  • the task target of the above virtual object may be a winning condition of the virtual object.
  • a virtual object with the same winning condition as the first virtual object is classified as a second virtual object, and a virtual object with a different winning condition from the first virtual object is classified as a third virtual object.
  • all virtual objects that belong to the same camp as the first virtual object may be used as second virtual objects, and the first interaction information is sent to the second client where each second virtual object is located; or only part of the virtual objects in the same camp as the first virtual object are used as second virtual objects, and the first interaction information is sent to the second client where the second virtual object is located; and second interaction information is sent to the third client where a third virtual object belonging to a different camp from the first virtual object is located.
  • the first interaction information is matched with the first emotion
  • the second interaction information is matched with the second emotion, and the first emotion is different from the second emotion.
  • the sending range of the first interaction message may be configured on the first client, where the first interaction message may be a full-person message or a friend message.
  • the first client can send a full-person message, or can send a friend message to a configured fixed friend.
  • Sending a full-person message sends the message to all other users. Sending a friend message requires forming a group of multiple friends; a friend message can be sent to all friends in a group at once, or to a fixed friend.
  • the second client displays a full-person message sent by the first client, and the full-person message is visible to all users.
  • FIG. 9 shows that when the user sends a friend message, the second client can see the friend message sent by the user, but the friend message is not visible to all users; only the friends configured by the first client can see it.
  • the full-person message and the friend message can be distinguished by setting the full-person message and the friend message to different colors or with different flags.
  • the buddy message in FIG. 10 is underlined, and is thus distinguished from the full-person message.
  • the third client where the third virtual object is located receives a message different from the second client.
  • FIG. 5 shows the first interaction information received by the second client, and FIG. 11 shows the interaction information received by the third client. Because the third virtual object of the third client is in a different camp from the first virtual object of the first client, the message displayed by the third client is different from the message displayed by the second client.
  • the searching for the first interaction information that matches the emotion identifier includes: when the emotion identifier indicates a first emotion type, first interaction information that matches the first emotion type is acquired, wherein the first interaction information that matches the first emotion type is used to request assistance for the first virtual object; when the emotion identifier indicates a second emotion type, first interaction information that matches the second emotion type is acquired, wherein the first interaction information that matches the second emotion type is used to encourage the second virtual object; and when the emotion identifier indicates a third emotion type, first interaction information that matches the third emotion type is acquired, wherein the first interaction information that matches the third emotion type is used to issue an inquiry request to the second virtual object.
  • In this embodiment, the biometric feature of the target object is extracted; the target object's current first emotion is identified from the extracted feature; the first interaction information to be exchanged, matching the first emotion, is determined; and the first interaction information is sent to the second client. The first interaction message is thus obtained from the target object's biometric feature and sent to the second client, avoiding the problem that the application task executed by the object controlled by the client must be interrupted to complete information interaction with the target object. This achieves the technical effect of reducing the complexity of the interaction operation and solves the technical problem of high interaction complexity in the related art.
  • Optionally, the sending unit includes:
  • a first determining module configured to determine the second virtual object from the virtual task, where the second virtual object and the first virtual object are virtual objects of the same camp; and
  • a first sending module configured to send the first interaction information to the second client where the second virtual object is located.
  • Optionally, the first interaction information may be text information, image information, or audio information.
  • Taking text information as an example, the first interaction information is described with reference to FIG. 5.
  • The client shown in FIG. 5 is the second client, and the virtual object in the second client is the second virtual object.
  • The first virtual object on the first client has a teammate relationship with the second virtual object, and the message sent by the first client is displayed in the upper-left corner of the second client.
  • The second client can thus learn the status of the first client's first virtual object.
  • In this embodiment, a virtual object in the same camp as the first virtual object is determined as the second virtual object, and the first interaction message is sent to the second client where the second virtual object is located, so the first interaction message is sent only to second virtual objects of the same camp, which improves the flexibility of sending the first interaction message.
  • Optionally, the first determining module includes:
  • a first acquisition submodule configured to acquire all virtual objects in the same camp as second virtual objects; or
  • a second acquisition submodule configured to acquire some virtual objects in the same camp as second virtual objects, where those virtual objects have an association relationship with the first virtual object.
  • Optionally, the first client may send an all-player message, or may send a friend message to configured, fixed friends.
  • Sending an all-player message sends the message to all other users.
  • For friend messages, multiple friends can form a group.
  • When sending a friend message, the first client sends it to the friends in a group in one shot, or to one fixed friend.
  • In this embodiment, all virtual characters in the same camp as the first virtual character, or only some of them, are used as second virtual characters, so the second virtual characters can be determined flexibly, making information interaction more flexible.
  • Optionally, the sending unit further includes:
  • a second determining module configured to determine a third virtual object from the virtual task, where the third virtual object and the first virtual object are virtual objects of different camps; and
  • a second sending module configured to send second interaction information to the third client where the third virtual object is located, where the second interaction information matches a second emotion, and the second emotion and the first emotion are different emotions.
  • As shown in FIGS. 5 and 11, FIG. 5 shows the interaction information received by the second client and FIG. 11 the interaction information received by the third client; because the third virtual object of the third client and the first virtual object of the first client are in different camps, the message displayed by the third client differs from the message displayed by the second client.
  • Optionally, the extraction unit includes: a first collection module configured to collect a facial image of the target object through the image collection device of the terminal where the first client is located; and a first extraction module configured to extract the target object's facial features from the facial image.
  • The recognition unit includes: a recognition module configured to recognize the target object's first emotion from the extracted facial features.
  • Optionally, the recognition module includes:
  • a first lookup submodule configured to find an emotion identifier matching the extracted facial features; and
  • a first determining submodule configured to take the emotion represented by the found emotion identifier as the first emotion.
  • Optionally, the image collection device may be a camera on the mobile terminal.
  • The facial features may be features of facial organs such as the eyebrows, forehead, eyes, and face.
  • The biometric feature may be the user's facial expression or voice information.
  • The user's facial image is collected by the collection apparatus and analyzed, and facial features such as the eyebrows, eyes, and mouth are extracted; the user's first emotion is then obtained from the characteristics of each facial feature.
  • After the facial image is obtained, the face image is cropped from it according to a face detection algorithm; the proportions of the cropped face image differ depending on the facial-feature extraction and expression classification method used. If the facial image is a dynamic picture, the facial features need to be tracked.
  • The cropped face image undergoes geometric or grayscale processing, after which facial features are extracted and the expression is recognized.
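  • A minimal preprocessing sketch along these lines is given below. It assumes OpenCV's bundled Haar cascade as the face detection algorithm; this application does not name a specific detector, so the cascade and the helper name are illustrative choices.

```python
import cv2

def crop_and_grayscale_face(frame):
    """Detect the first face in a BGR frame, crop it, and return a grayscale patch."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found; for a dynamic picture, retry on the next frame
    x, y, w, h = faces[0]
    return gray[y:y + h, x:x + w]  # cropped grayscale face, ready for feature extraction
```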
  • In this embodiment, facial features are extracted from the facial image of the target object and the first emotion is obtained from those features, so the target object's first emotion can be obtained directly from the facial features, reducing the complexity of information interaction.
  • Optionally, the extraction unit includes: a second collection module configured to collect a sound signal of the target object through the sound collection device of the terminal where the first client is located; and a second extraction module configured to extract the target object's sound features from the sound signal.
  • The recognition unit includes: a second recognition module configured to recognize the target object's first emotion from the extracted sound features.
  • Optionally, the second recognition module includes:
  • a third acquisition submodule configured to acquire a pre-configured target audio feature, where the target audio feature is used to trigger the first interaction information;
  • a fourth acquisition submodule configured to acquire, when the similarity between the sound features and the target audio feature is above a predetermined threshold, the emotion identifier corresponding to the target audio feature; and
  • a second determining submodule configured to take the emotion represented by the emotion identifier as the first emotion.
  • Optionally, the sound signal may be a sound made by the user.
  • The sound collection apparatus collects the user's voice and compares the collected sound with the target audio feature to obtain the emotion identifier corresponding to the target audio feature, and hence the first emotion.
  • For example, after a sound signal such as "Brothers, charge!" is received, the sound feature "charge" in the received signal is compared with the target audio feature.
  • The similarity between the sound feature and the target audio feature is found to be 80%. Since this exceeds the predetermined threshold of 60%, the corresponding emotion identifier is obtained from the target audio feature; the identifier is "excited", indicating that the user is currently very excited.
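  • The threshold test described above can be sketched as follows; the feature vectors and the cosine-similarity choice are assumptions (this application allows any algorithm), while the 0.6 threshold mirrors the 60% example.

```python
import numpy as np

TARGETS = {"excited": np.array([0.9, 0.4, 0.7])}  # illustrative pre-configured target features

def match_emotion(sound_feature, threshold=0.6):
    """Return (emotion_identifier, similarity) when similarity clears the threshold."""
    for emotion_id, target in TARGETS.items():
        sim = float(np.dot(sound_feature, target) /
                    (np.linalg.norm(sound_feature) * np.linalg.norm(target)))
        if sim > threshold:
            return emotion_id, sim  # e.g. ("excited", 0.8) for an 80% match
    return None, 0.0  # below threshold: no emotion identifier is triggered
```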
  • It should be noted that the target audio feature may be any sound signal collected by the sound collection device, and the sound features may be collected by any algorithm.
  • The target audio feature may be obtained by a method configured in advance, and the emotion identifier may be other words.
  • The target audio feature may also be a feature such as the timbre, pitch, or intensity of the sound.
  • The obtained voice information is compared with the timbre, pitch, and intensity of the target audio feature to obtain the corresponding emotion identifier.
  • Optionally, when the received sound signal is analyzed, the input speech is recognized through at least two speech recognition branches.
  • When the two speech recognition results recognized by the two branches are consistent, the recognized result can be output.
  • When the two results are inconsistent, the user is prompted to re-enter the speech signal.
  • Alternatively, at least two speech recognition results may be processed according to a majority-rule principle, a weighting algorithm, or a combination of the two to obtain a single speech recognition result, which is then output.
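  • A sketch of this combination step is given below: unanimous branch outputs are returned directly; otherwise a majority vote (optionally weighted per branch) decides, and a None result signals that the user should be prompted to speak again. The function and parameter names are illustrative assumptions.

```python
from collections import Counter

def combine_branches(results, weights=None):
    """Combine the outputs of several speech recognition branches."""
    if len(set(results)) == 1:
        return results[0]                      # all branches agree: output directly
    weights = weights or [1.0] * len(results)  # unweighted majority by default
    tally = Counter()
    for text, weight in zip(results, weights):
        tally[text] += weight
    ranked = tally.most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None                            # tie between branches: re-prompt the user
    return ranked[0][0]

print(combine_branches(["charge", "charge", "change"]))  # 'charge' by majority rule
print(combine_branches(["charge", "change"]))            # None -> prompt to speak again
```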
  • Optionally, each speech recognition branch may be implemented using a statistics-based hidden Markov model (HMM) recognition or training algorithm, or a combination of both.
  • In this embodiment, the target audio feature is pre-configured, and when the similarity between the sound feature and the target audio feature is above the predetermined threshold, the emotion identifier corresponding to the target audio feature is acquired and the emotion it identifies is taken as the first emotion, so the corresponding first emotion can be obtained from voice information, reducing the complexity of information interaction.
  • Optionally, the determining unit includes:
  • an acquisition module configured to acquire the emotion identifier of the first emotion; and
  • a lookup module configured to find the first interaction information matching the emotion identifier of the first emotion.
  • A correspondence between the emotion identifier of the first emotion and the first interaction information may be preset; the corresponding first interaction information is looked up from the preset correspondence according to the acquired emotion identifier, so that the first interaction information is obtained and sent.
  • In this embodiment, after the emotion identifier is acquired, the first interaction information is looked up according to the correspondence between the emotion identifier and the first interaction information, so the first interaction information can be sent, improving the efficiency of information interaction.
  • Optionally, the lookup module includes:
  • a fifth acquisition submodule configured to acquire, when the emotion identifier indicates a first emotion type, first interaction information matching the first emotion type, where the first interaction information matching the first emotion type is used to request help for the first virtual object;
  • a sixth acquisition submodule configured to acquire, when the emotion identifier indicates a second emotion type, first interaction information matching the second emotion type, where the first interaction information matching the second emotion type is used to encourage the second virtual object; and
  • a seventh acquisition submodule configured to acquire, when the emotion identifier indicates a third emotion type, first interaction information matching the third emotion type, where the first interaction information matching the third emotion type is used to issue an inquiry request to the second virtual object.
  • The first emotion type may be tension, excitement, doubt, and the like.
  • The first interaction information may be text information, such as "Save me", "Come on, we can do it!", or "Are you sure?".
  • When the emotion identifier indicates a first emotion type such as tension, the matching first interaction information may be "Save me"; when it indicates an emotion type such as excitement, the matching first interaction information may be "Come on, we can do it!"; when it indicates an emotion type such as doubt, the matching first interaction information may be "Are you sure?", used to express a question.
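  • An illustrative mapping of this kind is sketched below, using the example texts from this paragraph; the type names and the function name are assumptions rather than identifiers from this application.

```python
EMOTION_MESSAGES = {
    "tense":    "Save me",                 # request help for the first virtual object
    "excited":  "Come on, we can do it!",  # encourage the second virtual object
    "doubtful": "Are you sure?",           # issue an inquiry request
}

def first_interaction_info(emotion_type):
    """Look up the first interaction information matching an emotion type."""
    return EMOTION_MESSAGES.get(emotion_type, "")

print(first_interaction_info("tense"))  # Save me
```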
  • the content of the first interaction information is determined according to the type of the emotion identifier, thereby further reducing the complexity of information interaction and improving the flexibility of information interaction.
  • the determining unit includes at least one of the following:
  • a third determining module configured to determine text information that matches the first emotion
  • a fourth determining module configured to determine image information that matches the first emotion;
  • a fifth determining module configured to determine audio information that matches the first emotion.
  • FIG. 12 and FIG. 13 both show a second client.
  • The message sent by the first client to the second client may be a voice message or an image message.
  • FIG. 12 shows a voice message, and FIG. 13 shows an image message.
  • It should be noted that FIGS. 12 and 13 are only examples and do not limit the present application.
  • Embodiments of the present application also provide a storage medium having stored therein a computer program, wherein the computer program is configured to execute the steps of any one of the method embodiments described above.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: extract a biometric feature of a target object, where the target object controls a first virtual object through a first client to execute a virtual task; S2: identify the target object's current first emotion from the extracted biometric feature; S3: determine first interaction information to be exchanged that matches the first emotion; S4: send the first interaction information to the second client where the second virtual object is located, where the second virtual object executes the virtual task together with the first virtual object.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: determine the second virtual object from the virtual task, where the second virtual object and the first virtual object are virtual objects of the same camp; S2: send the first interaction information to the second client where the second virtual object is located.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: acquire all virtual objects in the same camp as second virtual objects; or S2: acquire some virtual objects in the same camp as second virtual objects, where those virtual objects have an association relationship with the first virtual object.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: determine a third virtual object from the virtual task, where the third virtual object and the first virtual object are virtual objects of different camps; S2: send second interaction information to the third client where the third virtual object is located, where the second interaction information matches a second emotion, and the second emotion and the first emotion are different emotions.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: extracting the biometric feature of the target object includes collecting a facial image of the target object through the image collection device of the terminal where the first client is located, and extracting the target object's facial features from the facial image; S2: identify the target object's first emotion from the extracted facial features.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: find an emotion identifier matching the extracted facial features; S2: take the emotion represented by the found emotion identifier as the first emotion.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: collect a sound signal of the target object through the sound collection device of the terminal where the first client is located, and extract the target object's sound features from the sound signal; S2: identify the target object's first emotion from the extracted sound features.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: acquire a pre-configured target audio feature, where the target audio feature is used to trigger the first interaction information; S2: when the similarity between the sound features and the target audio feature is above a predetermined threshold, acquire the emotion identifier corresponding to the target audio feature; S3: take the emotion represented by the emotion identifier as the first emotion.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: acquire the emotion identifier of the first emotion; S2: find the first interaction information matching the emotion identifier of the first emotion.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: when the emotion identifier indicates a first emotion type, acquire first interaction information matching the first emotion type, used to request help for the first virtual object; S2: when it indicates a second emotion type, acquire first interaction information matching the second emotion type, used to give an encouraging prompt to the second virtual object; S3: when it indicates a third emotion type, acquire first interaction information matching the third emotion type, used to issue an inquiry request to the second virtual object.
  • The above storage medium may be configured to store a computer program for performing the following steps: S1: determine text information matching the first emotion; S2: determine image information matching the first emotion; S3: determine audio information matching the first emotion.
  • the storage medium is further configured to store a computer program for performing the steps included in the method in the above embodiments, which will not be described in detail in this embodiment.
  • the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and the like.
  • According to yet another aspect of the embodiments of the present application, an electronic device for implementing the above information interaction method is further provided; as shown in FIG. 15, the electronic device includes a processor 1502, a memory 1504, a transmission device 1506, and a display 1508.
  • A computer program is stored in the memory 1504, and the processor is configured to perform the steps of any of the above method embodiments by running the computer program.
  • The transmission device 1506 is configured to transmit the collected facial images, voice information, and the like.
  • The display 1508 is configured to display the first interaction information and the like.
  • the foregoing electronic device may be located in at least one network device of the plurality of network devices of the computer network.
  • The processor 1502 may be configured to perform the following steps through the computer program: S1: extract a biometric feature of the target object, where the target object controls a first virtual object through a first client to execute a virtual task; S2: identify the target object's current first emotion from the extracted biometric feature; S3: determine first interaction information to be exchanged that matches the first emotion; S4: send the first interaction information to the second client where the second virtual object is located, where the second virtual object executes the virtual task together with the first virtual object.
  • The structure shown in FIG. 15 is only schematic; the electronic device may also be a terminal device such as a smart phone (for example an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD.
  • FIG. 15 does not limit the structure of the above electronic device.
  • The electronic device may include more or fewer components (such as a network interface) than shown in FIG. 15, or have a configuration different from that shown in FIG. 15.
  • The memory 1504 can be used to store software programs and modules, such as the program instructions/modules corresponding to the information interaction method in the embodiments of the present application.
  • The processor 1502 performs various functional applications and data processing by running the software programs and modules stored in the memory 1504, that is, implements the above information interaction method.
  • The memory 1504 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • The memory 1504 may further include memory remotely located relative to the processor 1502, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device 1506 described above is for receiving or transmitting data via a network.
  • Specific examples of the above network may include a wired network and a wireless network.
  • the transmission device 1506 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers through a network cable to communicate with the Internet or a local area network.
  • the transmission device 1506 is a Radio Frequency (RF) module for communicating wirelessly with the Internet.
  • the memory 1504 is configured to store information such as the first interaction information and the extracted biometrics.
  • If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in the above computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product stored in the storage medium and including several instructions to cause one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • In the embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners.
  • The device embodiments described above are merely illustrative.
  • The division of units is only a division by logical function; in actual implementation there may be other division manners.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented either in the form of hardware or in the form of a software functional unit.
  • In the embodiments of the present application, the biometric feature of the target object is extracted; the target object's current first emotion is identified from the extracted feature; the first interaction information to be exchanged, matching the first emotion, is determined; and the first interaction information is sent to the second client.
  • This avoids the problem that the application task executed by the object controlled by the application client must be interrupted to complete information interaction with the target object, so information interaction can be completed while the controlled object executes the application task, achieving the technical effect of reducing the complexity of the interaction operation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Epidemiology (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses an information interaction method and apparatus, a storage medium, and an electronic apparatus. The method includes: a terminal extracts a biometric feature of a target object, where the target object controls a first virtual object through a first client to execute a virtual task; the terminal identifies the target object's current first emotion from the extracted biometric feature; the terminal determines first interaction information to be exchanged that matches the first emotion; and the terminal sends the first interaction information to a second client where a second virtual object is located, where the second virtual object executes the virtual task together with the first virtual object. The present application solves the technical problem of high interaction complexity in related information interaction methods.

Description

Information Interaction Method and Apparatus, Storage Medium, and Electronic Apparatus
This application claims priority to Chinese Patent Application No. 201810142618.X, filed with the Chinese Patent Office on February 11, 2018 and entitled "Information interaction method and apparatus, storage medium and electronic apparatus", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of computers, and in particular to an information interaction method and apparatus, a storage medium, and an electronic apparatus.
Background
To implement real-time information interaction in an application client running on a terminal, an input plug-in is usually provided in the operation interface displayed by the application client; information entered by the user is obtained through the input plug-in and then sent to the target object to be interacted with, to complete the information interaction.
However, many terminal applications today require both of the user's hands to complete preset application tasks. That is, with the information interaction method provided by the above related art, the application task executed by the object controlled by the application client often has to be interrupted first; only after the information interaction with the target object is completed through the activated input plug-in is the application task executed by the controlled object resumed. In other words, the process of information interaction with the target object suffers from high operational complexity.
No effective solution to the above problem has yet been proposed.
Summary
Embodiments of the present application provide an information interaction method and apparatus, a storage medium, and an electronic apparatus, to solve at least the technical problem of high interaction complexity in related information interaction methods.
According to one aspect of the embodiments of the present application, an information interaction method is provided, including: a terminal extracts a biometric feature of a target object, where the target object controls a first virtual object through a first client to execute a virtual task; the terminal identifies the target object's current first emotion from the extracted biometric feature; the terminal determines first interaction information to be exchanged that matches the first emotion; and the terminal sends the first interaction information to a second client where a second virtual object is located, where the second virtual object executes the virtual task together with the first virtual object.
According to another aspect of the embodiments of the present application, an information interaction apparatus is further provided, applied to a terminal and including: an extraction unit configured to extract a biometric feature of a target object, where the target object controls a first virtual object through a first client to execute a virtual task; a recognition unit configured to identify the target object's current first emotion from the extracted biometric feature; a determining unit configured to determine first interaction information to be exchanged that matches the first emotion; and a sending unit configured to send the first interaction information to a second client where a second virtual object is located, where the second virtual object executes the virtual task together with the first virtual object.
According to yet another aspect of the embodiments of the present application, a storage medium is further provided, storing a computer program, where the computer program is configured to perform the above information interaction method when run.
In the embodiments of the present application, the terminal extracts the biometric feature of the target object, identifies the target object's current first emotion from the extracted biometric feature, determines the first interaction information to be exchanged that matches the first emotion, and sends the first interaction information to the second client where the second virtual object is located. The first interaction message to be exchanged can therefore be obtained from the target object's biometric feature and sent to the second client, avoiding the problem that the application task executed by the object controlled by the application client must be interrupted to complete information interaction with the target object; information interaction can thus be completed while the controlled object executes the application task, achieving the technical effect of reducing the complexity of the interaction operation and solving the technical problem of high interaction complexity in related information interaction methods.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present application and constitute a part of it; the exemplary embodiments of the present application and their description are used to explain the present application and do not constitute an improper limitation on it. In the drawings:
FIG. 1 is a schematic diagram of the application environment of an optional information interaction method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of an optional information interaction method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an optional information interaction method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another optional information interaction method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of yet another optional information interaction method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of yet another optional information interaction method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of yet another optional information interaction method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of yet another optional information interaction method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of yet another optional information interaction method according to an embodiment of the present application;
FIG. 10 is a schematic diagram of yet another optional information interaction method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of yet another optional information interaction method according to an embodiment of the present application;
FIG. 12 is a schematic diagram of yet another optional information interaction method according to an embodiment of the present application;
FIG. 13 is a schematic diagram of yet another optional information interaction method according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an optional information interaction apparatus according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an optional electronic apparatus according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described here can be implemented in orders other than those illustrated or described. Moreover, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those expressly listed, but may include other steps or units not expressly listed or inherent to such process, method, product, or device.
According to one aspect of the embodiments of the present application, an information interaction method is provided. Optionally, the information interaction method can be, but is not limited to being, applied in the environment shown in FIG. 1.
As shown in FIG. 1, a terminal 102 recognizes a person's facial features through a recognition apparatus carried on the terminal for identifying the user's biometric features, or collects the user's voice features through a sound collection apparatus. From the collected biometric features, the first emotion of the target object is identified, first interaction information to be exchanged that matches the first emotion is determined, and the first interaction information is sent over a network 104 to a second terminal 106 where the second virtual object is located; after receiving the first interaction information, the second terminal 106 displays it on the second client. The first client is located on the first terminal 102, and the second client on the second terminal 106.
Optionally, in this embodiment, the first client and the second client may include, but are not limited to, at least one of the following: a mobile phone, a tablet computer, a notebook computer, and other mobile hardware devices capable of extracting the biometric features of a target object. The network may include, but is not limited to, a wireless network, where the wireless network includes Bluetooth, WIFI, and other networks implementing wireless communication. The above is only an example, and this embodiment does not impose any limitation on it.
Optionally, in this embodiment, as an optional implementation, as shown in FIG. 2, the information interaction method may include the following steps (a minimal sketch of this flow follows the list):
S202: A terminal extracts a biometric feature of a target object, where the target object controls a first virtual object through a first client to execute a virtual task.
S204: The terminal identifies the target object's current first emotion from the extracted biometric feature.
S206: The terminal determines first interaction information to be exchanged that matches the first emotion.
S208: The terminal sends the first interaction information to a second client where a second virtual object is located, where the second virtual object executes the virtual task together with the first virtual object.
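A minimal end-to-end sketch of steps S202-S208 is given below. The helper bodies are placeholders (hypothetical, not from this application) so the flow is runnable; real implementations would use the camera or microphone capture and the recognition and matching logic described later in this document.

```python
def extract_biometric(target_object):          # S202: capture a facial/voice feature
    return target_object.get("feature")

def recognize_emotion(feature):                # S204: identify the current first emotion
    return {"smile": "excited"}.get(feature, "tense")

def lookup_message(emotion):                   # S206: match first interaction information
    return {"excited": "Come on, we can do it!", "tense": "Save me"}[emotion]

def send_to_second_client(client_outbox, message):  # S208: deliver to the teammate's client
    client_outbox.append(message)

outbox = []
send_to_second_client(outbox, lookup_message(recognize_emotion(
    extract_biometric({"feature": "smile"}))))
print(outbox)  # ['Come on, we can do it!']
```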
Optionally, the above information interaction method can be, but is not limited to being, applied in the game field or the simulation training field. Taking the game field as an example, the first client may be a terminal used by one user, and the second client a terminal used by another user. The first virtual object may be the virtual object controlled by the first client, and the second virtual object the virtual object controlled by the second client. After the terminal used by the user extracts the user's biometric feature, it identifies the user's current first emotion from the extracted feature, for example anger, tension, or excitement. After identifying the current first emotion, the terminal determines the first interaction information matching it and sends the information to the second client used by the other user.
Through the above method, the terminal extracts the biometric feature of the target object, identifies the target object's current first emotion from the extracted feature, determines the first interaction information to be exchanged that matches the first emotion, and sends it to the second client where the second virtual object is located. The first interaction message to be exchanged can therefore be obtained from the target object's biometric feature and sent to the second client, avoiding the problem that the application task executed by the object controlled by the application client must be interrupted to complete information interaction with the target object; information can thus be exchanged while the controlled object executes the application task, achieving the technical effect of reducing the complexity of the interaction operation and solving the problem of high interaction complexity in the related art.
Optionally, a facial image of the target object may be collected through the image collection device of the terminal where the first client is located, and the target object's facial features extracted from the facial image; the terminal looks up an emotion identifier corresponding to the extracted facial features and takes the emotion represented by that identifier as the first emotion.
For example, with reference to FIG. 3: the biometric feature may be the user's facial expression or voice information. As shown in FIG. 3, the user's facial image is collected by the collection apparatus and analyzed, and facial features such as the eyebrows, eyes, and mouth are extracted; the user's first emotion is obtained from the characteristics of each facial feature.
Optionally, a sound signal of the target object may be collected through the sound collection device of the terminal where the first client is located, and the target object's sound features extracted from the sound signal. The terminal compares the extracted sound features with pre-configured target audio features; when the similarity between the sound features and the target audio feature is above a predetermined threshold, it acquires the emotion identifier corresponding to the target audio feature and takes the emotion represented by that identifier as the first emotion.
For example, with reference to Table 1 and FIG. 4: the sound signal may be a sound made by the user. As shown in FIG. 4, after the user speaks, the sound collection apparatus collects the user's voice and compares the collected sound with the target audio features to obtain the corresponding emotion identifier and hence the first emotion. As shown in Table 1, after a sound signal of the target object such as "Brothers, charge!" is received, the sound feature "charge" in the received signal is compared with the target audio feature, and the similarity between them is found to be 80%. Since the similarity exceeds the predetermined threshold of 60%, the corresponding emotion identifier is obtained from the target audio feature; the identifier is "excited", indicating that the user is currently very excited.
Table 1
It should be noted that the content of Table 1 is provided only for explanation and does not limit the present application. The target audio feature may be any sound signal collected by the sound collection device, the sound features may be collected by any algorithm, the target audio feature may be obtained by a method configured in advance, and the emotion identifier may be other words.
It should be noted that the target audio feature may also be a feature such as the timbre, pitch, or intensity of the sound. After the user's voice information is obtained, it is compared with the timbre, pitch, and intensity of the target audio feature to obtain the corresponding emotion identifier.
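The comparison of timbre, pitch, and intensity can be sketched with simple waveform statistics; the NumPy-only proxies below (RMS energy for intensity, zero-crossing rate for pitch and brightness) are illustrative assumptions, since this application leaves the concrete audio features open.

```python
import numpy as np

def basic_voice_features(samples, sample_rate):
    """Derive rough intensity, pitch, and timbre proxies from a raw waveform."""
    intensity = float(np.sqrt(np.mean(samples ** 2)))             # RMS energy ~ loudness
    zero_crossings = np.count_nonzero(np.diff(np.sign(samples)))  # crude brightness/timbre cue
    pitch_proxy = zero_crossings * sample_rate / (2 * len(samples))  # rough frequency estimate
    return {"intensity": intensity, "pitch": pitch_proxy, "timbre": zero_crossings}

tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of a 440 Hz tone
print(basic_voice_features(tone, 16000))  # pitch proxy is approximately 440
```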
The following takes a game as an example, described with reference to FIGS. 5 and 6. In a game round, the terminal where the first client is located collects the user's facial image and voice information through the collection apparatus carried on the terminal. The terminal analyzes the collected facial image to obtain facial features and analyzes the voice information to obtain sound features. The corresponding emotion identifier is obtained from the facial and sound features, yielding the user's first emotion. The first interaction information is then obtained from the first emotion and displayed on the second client. The display result is shown in FIG. 5.
It should be noted that when the first interaction information is displayed on the second client, it may or may not also be displayed on the first client. FIG. 6 shows an example in which the first client displays the first interaction information.
Optionally, the terminal may, but is not limited to, determine virtual objects in the same camp as the first virtual object as second virtual objects and virtual objects in a different camp as third virtual objects.
Optionally, the second virtual object may be one or more virtual objects in the same camp as the first virtual object, and the third virtual object one or more virtual objects in a different camp. The second virtual object may have a teammate relationship with the first virtual object, and the third virtual object may belong to a different squad, and so on.
Optionally, the second or third virtual object may be determined using the following methods (a sketch of these grouping rules follows the example below):
1) The terminal classifies a virtual object as a second or third virtual object according to the virtual object's identity information;
2) The terminal classifies a virtual object as a second or third virtual object according to the virtual object's task goal;
3) The terminal classifies a virtual object as a second or third virtual object according to the virtual object's position.
For example, continuing with the game field: the identity information may be the virtual object's gender, nationality, and so on. For instance, the terminal sets virtual objects with the same nationality as the first virtual object as second virtual objects and those with a different nationality as third virtual objects. The position may be the virtual object's spawn position: spawn regions are configured in advance for different virtual objects, virtual objects with the same spawn region as the first virtual object are set as second virtual objects, and those with a different spawn region as third virtual objects. The task goal may be the virtual object's winning condition: virtual objects with the same winning condition as the first virtual object are classified as second virtual objects, and those with a different winning condition as third virtual objects.
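The three grouping rules can be sketched as follows; the VirtualObject fields and the rule names are illustrative assumptions rather than identifiers from this application.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    nationality: str    # identity information
    win_condition: str  # task goal
    spawn_zone: str     # position (spawn region)

def classify(first, other, rule="goal"):
    """Return 'second' (same camp) or 'third' (different camp) for `other`."""
    same = {
        "identity": other.nationality == first.nationality,
        "goal":     other.win_condition == first.win_condition,
        "position": other.spawn_zone == first.spawn_zone,
    }[rule]
    return "second" if same else "third"

me = VirtualObject("p1", "CN", "capture_flag", "north")
ally = VirtualObject("p2", "CN", "capture_flag", "north")
print(classify(me, ally, rule="goal"))  # 'second'
```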
Optionally, the terminal may take all virtual objects in the same camp as the first virtual object as second virtual objects and send the first interaction information to the second clients where they are located, or take only some virtual objects in that camp as second virtual objects and send the first interaction information to their second clients; it also sends second interaction information to the third client where a third virtual object, belonging to a different camp from the first virtual object, is located. The first interaction information matches the first emotion, the second interaction information matches a second emotion, and the first and second emotions are different.
For example, continuing with the above game and with reference to FIG. 7: the sending range of the first interaction message can be configured on the first client, and the first interaction message may be an all-player message or a friend message. The first client can send an all-player message or send a friend message to configured fixed friends. Sending an all-player message sends the message to all other users; for friend messages, several friends can form a group, and when sending a friend message the first client sends it to the friends in a group at one time, or to one fixed friend.
For example, as shown in FIGS. 8-10: in FIG. 8, when the first client is configured to send all-player messages, the all-player message sent by the first client is displayed on the second client and is visible to all users. FIG. 9 shows that when the user sends a friend message, the second client can see the friend message, but the message is not visible to all users; only the friends configured on the first client can see it. All-player messages and friend messages can be distinguished by rendering them in different colors or with different marks. As shown in FIG. 10, the friend message there is underlined and thus set apart from all-player messages.
Optionally, after the first client sends a message, the third client where the third virtual object is located receives a message different from the second client's. For example, as shown in FIGS. 5 and 11, FIG. 5 shows the interaction information received by the second client and FIG. 11 the interaction information received by the third client; because the third virtual object of the third client and the first virtual object of the first client are in different camps, the message displayed by the third client differs from the message displayed by the second client.
Optionally, the terminal's finding the first interaction information matching the emotion identifier of the first emotion includes: when the emotion identifier indicates a first emotion type, the terminal acquires the first interaction information matching the first emotion type, which is used to request help for the first virtual object; when the emotion identifier indicates a second emotion type, the terminal acquires the first interaction information matching the second emotion type, which is used to give an encouraging prompt to the second virtual object; and when the emotion identifier indicates a third emotion type, the terminal acquires the first interaction information matching the third emotion type, which is used to issue an inquiry request to the second virtual object.
Through this embodiment, the terminal extracts the biometric feature of the target object, identifies the target object's current first emotion from the extracted feature, determines the first interaction information to be exchanged that matches the first emotion, and sends it to the second client where the second virtual object is located. The first interaction message to be exchanged can therefore be obtained from the target object's biometric feature and sent to the second client, avoiding the problem that the application task executed by the object controlled by the application client must be interrupted to complete information interaction with the target object, thereby achieving the technical effect of reducing the complexity of the interaction operation and solving the technical problem of high interaction complexity in the related art.
As an optional implementation, the terminal's sending the first interaction information to the second client where the second virtual object is located includes:
S1: The terminal determines the second virtual object from the virtual task, where the second virtual object and the first virtual object are virtual objects of the same camp;
S2: The terminal sends the first interaction information to the second client where the second virtual object is located.
Optionally, the first interaction information may be text, image, or audio information. Taking text information as an example and referring to FIG. 5: the client shown in FIG. 5 is the second client, and the virtual object in it is the second virtual object. The first virtual object on the first client has a teammate relationship with the second virtual object, and the message sent by the first client is displayed in the upper-left corner of the second client. The second client can thus learn the status of the first client's first virtual object.
Through this embodiment, the terminal determines a virtual object in the same camp as the first virtual object as the second virtual object and sends the first interaction message to the second client where the second virtual object is located, so the first interaction message is sent only to second virtual objects of the same camp, improving the flexibility of sending the first interaction message.
As an optional implementation, the terminal's determining the second virtual object from the virtual task includes:
(1) The terminal acquires all virtual objects in the same camp as second virtual objects; or
(2) The terminal acquires some virtual objects in the same camp as second virtual objects, where those virtual objects have an association relationship with the first virtual object.
For example, continuing with the game field: FIG. 7 shows the configuration interface of the first client. The first client can send an all-player message or send a friend message to configured fixed friends. Sending an all-player message sends the message to all other users; for friend messages, several friends can form a group, and a friend message is sent at one time to the friends in a group or to one fixed friend.
Through this embodiment, the terminal uses all virtual characters in the same camp as the first virtual character, or only some of them, as second virtual characters, so the second virtual characters can be determined flexibly, making information interaction more flexible.
As an optional implementation, when sending the first interaction information to the second client where the second virtual object is located, the terminal further performs the following:
S1: The terminal determines a third virtual object from the virtual task, where the third virtual object and the first virtual object are virtual objects of different camps;
S2: The terminal sends second interaction information to the third client where the third virtual object is located, where the second interaction information matches a second emotion, and the second emotion and the first emotion are different emotions.
For example, as shown in FIGS. 5 and 11, FIG. 5 shows the interaction information received by the second client and FIG. 11 the interaction information received by the third client; because the third virtual object of the third client and the first virtual object of the first client are in different camps, the message displayed by the third client differs from the message displayed by the second client.
Through this embodiment, the terminal determines the third virtual object and sends the second interaction message to it, which improves the flexibility of information interaction and further reduces its complexity.
As an optional implementation,
S1: the terminal's extracting the biometric feature of the target object includes: the terminal collects a facial image of the target object through the image collection device of the terminal where the first client is located, and extracts the target object's facial features from the facial image;
S2: the terminal's identifying the target object's current first emotion from the extracted biometric feature includes: the terminal identifies the target object's first emotion from the extracted facial features.
The terminal's identifying the target object's first emotion from the extracted facial features includes:
S1: The terminal looks up an emotion identifier matching the extracted facial features;
S2: The terminal takes the emotion represented by the found emotion identifier as the first emotion.
Optionally, the image collection device may be a camera on the mobile terminal, and the facial features may be features of facial organs such as the eyebrows, forehead, eyes, and face.
For example, with reference to FIG. 3 and Table 2: the biometric feature may be the user's facial expression or voice information. As shown in FIG. 3, the user's facial image is collected by the collection apparatus and analyzed, and facial features such as the eyebrows, eyes, and mouth are extracted; the user's first emotion is obtained from the characteristics of each facial feature.
Table 2 shows an optional correspondence between facial features and the first emotion.
Table 2
It should be noted that using a camera as the image collection apparatus is only an optional example and does not limit the present application.
Optionally, after the terminal obtains the facial image through the camera, the face image is cropped from it according to a face detection algorithm; the proportions of the cropped face image differ depending on the facial-feature extraction and expression classification method used. If the facial image is a dynamic picture, the facial features need to be tracked. The cropped face image undergoes geometric or grayscale processing, after which facial features are extracted and the expression is recognized.
Through this embodiment, the terminal extracts facial features from the facial image of the target object and obtains the first emotion from those features, so the target object's first emotion can be obtained directly from the facial features, reducing the complexity of information interaction.
As an optional implementation,
S1: the terminal's extracting the biometric feature of the target object includes: the terminal collects a sound signal of the target object through the sound collection device of the terminal where the first client is located, and extracts the target object's sound features from the sound signal;
S2: the terminal's identifying the target object's current first emotion from the extracted biometric feature includes: the terminal identifies the target object's first emotion from the extracted sound features.
The terminal's identifying the target object's first emotion from the extracted sound features includes:
S1: The terminal acquires a pre-configured target audio feature, where the target audio feature is used to trigger the first interaction information;
S2: When the similarity between the sound features and the target audio feature is above a predetermined threshold, the terminal acquires the emotion identifier corresponding to the target audio feature;
S3: The terminal takes the emotion represented by the emotion identifier as the first emotion.
For example, with reference to Table 1 and FIG. 4: the sound signal may be a sound made by the user. As shown in FIG. 4, after the user speaks, the sound collection apparatus collects the user's voice and compares the collected sound with the target audio features to obtain the corresponding emotion identifier and hence the first emotion. As shown in Table 1, after a sound signal of the target object such as "Brothers, charge!" is received, the sound feature "charge" in the received signal is compared with the target audio feature, and the similarity between them is found to be 80%. Since the similarity exceeds the predetermined threshold of 60%, the corresponding emotion identifier is obtained from the target audio feature; the identifier is "excited", indicating that the user is currently very excited.
It should be noted that the content of Table 1 above is provided only for explanation and does not limit the present application. The target audio feature may be any sound signal collected by the sound collection device, the sound features may be collected by any algorithm, the target audio feature may be obtained by a method configured in advance, and the emotion identifier may be other words.
It should be noted that the target audio feature may also be a feature such as the timbre, pitch, or intensity of the sound. After the user's voice information is obtained, it is compared with the timbre, pitch, and intensity of the target audio feature to obtain the corresponding emotion identifier. Optionally, when the received sound signal is analyzed, the input speech is recognized through at least two speech recognition branches. Only when the two speech recognition results recognized by the two branches are consistent can the recognized result be output; when the two results are inconsistent, the user is prompted to re-enter the speech signal.
Optionally, when the speech recognition results recognized by at least two branches are inconsistent, the terminal may also process the results according to a majority-rule principle, a weighting algorithm, or a combination of the two to obtain one speech recognition result and output it.
Optionally, the speech recognition branches may be implemented using a statistics-based hidden Markov model recognition or training algorithm, or a combination of both.
Through this embodiment, the target audio feature is pre-configured; when the similarity between the target audio feature and the sound feature is above the predetermined threshold, the terminal acquires the emotion identifier corresponding to the target audio feature and takes the emotion identified by it as the first emotion, so the corresponding first emotion can be obtained from voice information, reducing the complexity of information interaction.
As an optional implementation, the terminal's determining the first interaction information to be exchanged that matches the target object's current first emotion includes:
S1: The terminal acquires the emotion identifier of the first emotion;
S2: The terminal looks up the first interaction information matching the emotion identifier of the first emotion.
Optionally, a correspondence between the emotion identifier of the first emotion and the first interaction information may be preset; the corresponding first interaction information is looked up from the preset correspondence according to the acquired emotion identifier, so that the first interaction information is obtained and sent.
Through this embodiment, after the terminal acquires the emotion identifier, it looks up the first interaction information according to the correspondence between the emotion identifier and the first interaction information, so the first interaction information can be sent, improving the efficiency of information interaction.
As an optional implementation, the terminal's finding the first interaction information matching the emotion identifier of the first emotion includes:
(1) When the emotion identifier indicates a first emotion type, the terminal acquires first interaction information matching the first emotion type, where the first interaction information matching the first emotion type is used to request help for the first virtual object;
(2) When the emotion identifier indicates a second emotion type, the terminal acquires first interaction information matching the second emotion type, where the first interaction information matching the second emotion type is used to give an encouraging prompt to the second virtual object;
(3) When the emotion identifier indicates a third emotion type, the terminal acquires first interaction information matching the third emotion type, where the first interaction information matching the third emotion type is used to issue an inquiry request to the second virtual object.
For example, continuing with the above game: when the acquired emotion identifiers are of different types, the first interaction information carries different content. The first emotion type may be tension, excitement, doubt, and the like. The first interaction information may be text information such as "Save me", "Come on, we can do it!", or "Are you sure?". When the emotion identifier indicates a first emotion type such as tension, the matching first interaction information may be "Save me"; when it indicates an emotion type such as excitement, the matching first interaction information may be "Come on, we can do it!"; when it indicates an emotion type such as doubt, the matching first interaction information may be "Are you sure?", used to express a question.
Through this embodiment, the terminal decides the content of the first interaction information according to the type of the emotion identifier, which further reduces the complexity of information interaction and improves its flexibility.
As an optional implementation, the terminal's determining the first interaction information to be exchanged that matches the target object's current first emotion includes at least one of the following:
(1) The terminal determines text information matching the first emotion;
(2) The terminal determines image information matching the first emotion;
(3) The terminal determines audio information matching the first emotion.
For example, continuing with the above game and with reference to FIGS. 12-13: FIG. 12 and FIG. 13 show a second client. The message sent by the first client to the second client may be a voice message or an image message; FIG. 12 shows a voice message, and FIG. 13 an image message.
It should be noted that FIGS. 12 and 13 are only examples and do not limit the present application.
Through this embodiment, different types are provided by the terminal for the first interaction information, which improves the flexibility of information interaction and further reduces its complexity.
It should be noted that, for simple description, each of the foregoing method embodiments is expressed as a combination of a series of actions, but those skilled in the art should know that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present application.
Through the description of the above implementations, those skilled in the art can clearly understand that the method according to the above embodiments may be implemented by software plus the necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present application.
According to another aspect of the embodiments of the present application, an information interaction apparatus for implementing the above information interaction method is further provided, applied to a terminal. In this embodiment, as an optional implementation, as shown in FIG. 14, the information interaction apparatus may include:
(1) an extraction unit 1402 configured to extract a biometric feature of a target object, where the target object controls a first virtual object through a first client to execute a virtual task;
(2) a recognition unit 1404 configured to identify the target object's current first emotion from the extracted biometric feature;
(3) a determining unit 1406 configured to determine first interaction information to be exchanged that matches the first emotion;
(4) a sending unit 1408 configured to send the first interaction information to a second client where a second virtual object is located, where the second virtual object executes the virtual task together with the first virtual object.
Optionally, the information interaction apparatus can be, but is not limited to being, applied in the game field or the simulation training field. Taking the game field as an example, the first client may be a game device used by one user, and the second client a game device used by another user. The first virtual object may be the virtual object controlled by the first client, and the second virtual object the virtual object controlled by the second client. After the game device used by the user extracts the user's biometric feature, it identifies the user's current first emotion from the extracted feature, for example anger, tension, or excitement. After the current first emotion is identified, the first interaction information matching it is determined and sent to the second client used by the other user.
Through the above method, the biometric feature of the target object is extracted; the target object's current first emotion is identified from the extracted feature; the first interaction information to be exchanged, matching the first emotion, is determined; and the first interaction information is sent to the second client where the second virtual object is located. The first interaction message to be exchanged can therefore be obtained from the target object's biometric feature and sent to the second client, avoiding the problem that the application task executed by the object controlled by the application client must be interrupted to complete information interaction with the target object; information can thus be exchanged while the controlled object executes the application task, achieving the technical effect of reducing the complexity of the interaction operation and solving the problem of high interaction complexity in the related art.
Optionally, the first interaction information may be, but is not limited to, one or more of text information, image information, and audio information.
Optionally, a facial image of the target object may be collected through the image collection device of the terminal where the first client is located, and the target object's facial features extracted from the facial image; an emotion identifier corresponding to the extracted facial features is looked up, and the emotion represented by that identifier is taken as the first emotion.
For example, with reference to FIG. 3: the biometric feature may be the user's facial expression or voice information. As shown in FIG. 3, the user's facial image is collected by the collection apparatus and analyzed, and facial features such as the eyebrows, eyes, and mouth are extracted; the user's first emotion is obtained from the characteristics of each facial feature.
Optionally, a sound signal of the target object may be collected through the sound collection device of the terminal where the first client is located, and the target object's sound features extracted from the sound signal. The extracted sound features are compared with pre-configured target audio features; when the similarity between the sound features and the target audio feature is above a predetermined threshold, the emotion identifier corresponding to the target audio feature is acquired, and the emotion represented by that identifier is taken as the first emotion.
For example, with reference to Table 1 and FIG. 4: the sound signal may be a sound made by the user. As shown in FIG. 4, after the user speaks, the sound collection apparatus collects the user's voice and compares the collected sound with the target audio features to obtain the corresponding emotion identifier and hence the first emotion. As shown in Table 1, after a sound signal of the target object such as "Brothers, charge!" is received, the sound feature "charge" in the received signal is compared with the target audio feature, and the similarity between them is found to be 80%. Since the similarity exceeds the predetermined threshold of 60%, the corresponding emotion identifier is obtained from the target audio feature; the identifier is "excited", indicating that the user is currently very excited.
It should be noted that the content of Table 1 above is provided only for explanation and does not limit the present application. The target audio feature may be any sound signal collected by the sound collection device, the sound features may be collected by any algorithm, the target audio feature may be obtained by a method configured in advance, and the emotion identifier may be other words.
It should be noted that the target audio feature may also be a feature such as the timbre, pitch, or intensity of the sound. After the user's voice information is obtained, it is compared with the timbre, pitch, and intensity of the target audio feature to obtain the corresponding emotion identifier.
The following takes a game as an example, described with reference to FIGS. 5 and 6. In a game round, the terminal where the first client is located collects the user's facial image and voice information through the collection apparatus carried on the terminal. The collected facial image is analyzed to obtain facial features, and the voice information is analyzed to obtain sound features. The corresponding emotion identifier is obtained from the facial and sound features, yielding the user's first emotion. The first interaction information is then obtained from the first emotion and displayed on the second client; the display result is shown in FIG. 5.
It should be noted that when the first interaction information is displayed on the second client, it may or may not also be displayed on the first client. FIG. 6 shows an example in which the first client displays the first interaction information.
Optionally, virtual objects in the same camp as the first virtual object may, but are not limited to, be determined as second virtual objects, and virtual objects in a different camp as third virtual objects.
Optionally, the second virtual object may be one or more virtual objects in the same camp as the first virtual object, and the third virtual object one or more virtual objects in a different camp. The second virtual object may have a teammate relationship with the first virtual object, and the third virtual object may belong to a different squad, and so on.
Optionally, the second or third virtual object may be determined using the following methods:
1) A virtual object is classified as a second or third virtual object according to its identity information;
2) A virtual object is classified as a second or third virtual object according to its task goal;
3) A virtual object is classified as a second or third virtual object according to its position.
For example, continuing with the game field: the identity information may be the virtual object's gender, nationality, and so on. For instance, virtual objects with the same nationality as the first virtual object are set as second virtual objects and those with a different nationality as third virtual objects. The position may be the virtual object's spawn position: spawn regions are configured in advance for different virtual objects, virtual objects with the same spawn region as the first virtual object are set as second virtual objects, and those with a different spawn region as third virtual objects. The task goal may be the virtual object's winning condition: virtual objects with the same winning condition as the first virtual object are classified as second virtual objects, and those with a different winning condition as third virtual objects.
Optionally, all virtual objects in the same camp as the first virtual object may be taken as second virtual objects and the first interaction information sent to the second clients where they are located, or only some virtual objects in that camp may be taken as second virtual objects and the first interaction information sent to their second clients; second interaction information is also sent to the third client where a third virtual object, belonging to a different camp from the first virtual object, is located. The first interaction information matches the first emotion, the second interaction information matches a second emotion, and the first and second emotions are different.
For example, continuing with the above game and with reference to FIG. 7: the sending range of the first interaction message can be configured on the first client, and the first interaction message may be an all-player message or a friend message. The first client can send an all-player message or send a friend message to configured fixed friends. Sending an all-player message sends the message to all other users; for friend messages, several friends can form a group, and when sending a friend message the first client sends it to the friends in a group at one time, or to one fixed friend.
For example, as shown in FIGS. 8-10: in FIG. 8, when the first client is configured to send all-player messages, the all-player message sent by the first client is displayed on the second client and is visible to all users. FIG. 9 shows that when the user sends a friend message, the second client can see the friend message, but the message is not visible to all users; only the friends configured on the first client can see it. All-player messages and friend messages can be distinguished by rendering them in different colors or with different marks. As shown in FIG. 10, the friend message there is underlined and thus set apart from all-player messages.
Optionally, after the first client sends a message, the third client where the third virtual object is located receives a message different from the second client's. For example, as shown in FIGS. 5 and 11, FIG. 5 shows the interaction information received by the second client and FIG. 11 the interaction information received by the third client; because the third virtual object of the third client and the first virtual object of the first client are in different camps, the message displayed by the third client differs from the message displayed by the second client.
Optionally, finding the first interaction information matching the emotion identifier of the first emotion includes: when the emotion identifier indicates a first emotion type, acquiring the first interaction information matching the first emotion type, which is used to request help for the first virtual object; when the emotion identifier indicates a second emotion type, acquiring the first interaction information matching the second emotion type, which is used to give an encouraging prompt to the second virtual object; and when the emotion identifier indicates a third emotion type, acquiring the first interaction information matching the third emotion type, which is used to issue an inquiry request to the second virtual object.
Through this embodiment, the biometric feature of the target object is extracted; the target object's current first emotion is identified from the extracted feature; the first interaction information to be exchanged, matching the first emotion, is determined; and the first interaction information is sent to the second client where the second virtual object is located. The first interaction message to be exchanged can therefore be obtained from the target object's biometric feature and sent to the second client, avoiding the problem that the application task executed by the object controlled by the application client must be interrupted to complete information interaction with the target object, thereby achieving the technical effect of reducing the complexity of the interaction operation and solving the technical problem of high interaction complexity in the related art.
As an optional implementation, the sending unit includes:
(1) a first determining module configured to determine the second virtual object from the virtual task, where the second virtual object and the first virtual object are virtual objects of the same camp;
(2) a first sending module configured to send the first interaction information to the second client where the second virtual object is located.
Optionally, the first interaction information may be text, image, or audio information. Taking text information as an example and referring to FIG. 5: the client shown in FIG. 5 is the second client, and the virtual object in it is the second virtual object. The first virtual object on the first client has a teammate relationship with the second virtual object, and the message sent by the first client is displayed in the upper-left corner of the second client. The second client can thus learn the status of the first client's first virtual object.
Through this embodiment, a virtual object in the same camp as the first virtual object is determined as the second virtual object and the first interaction message is sent to the second client where the second virtual object is located, so the first interaction message is sent only to second virtual objects of the same camp, improving the flexibility of sending the first interaction message.
As an optional implementation, the first determining module includes:
(1) a first acquisition submodule configured to acquire all virtual objects in the same camp as second virtual objects; or
(2) a second acquisition submodule configured to acquire some virtual objects in the same camp as second virtual objects, where those virtual objects have an association relationship with the first virtual object.
For example, continuing with the game field: FIG. 7 shows the configuration interface of the first client. The first client can send an all-player message or send a friend message to configured fixed friends. Sending an all-player message sends the message to all other users; for friend messages, several friends can form a group, and a friend message is sent at one time to the friends in a group or to one fixed friend.
Through this embodiment, all virtual characters in the same camp as the first virtual character, or only some of them, are used as second virtual characters, so the second virtual characters can be determined flexibly, making information interaction more flexible.
As an optional implementation, the sending unit further includes:
(1) a second determining module configured to determine a third virtual object from the virtual task, where the third virtual object and the first virtual object are virtual objects of different camps;
(2) a second sending module configured to send second interaction information to the third client where the third virtual object is located, where the second interaction information matches a second emotion, and the second emotion and the first emotion are different emotions.
For example, as shown in FIGS. 5 and 11, FIG. 5 shows the interaction information received by the second client and FIG. 11 the interaction information received by the third client; because the third virtual object of the third client and the first virtual object of the first client are in different camps, the message displayed by the third client differs from the message displayed by the second client.
Through this embodiment, the third virtual object is determined and the second interaction message is sent to it, which improves the flexibility of information interaction and further reduces its complexity.
As an optional implementation,
(1) the extraction unit includes: a first collection module configured to collect a facial image of the target object through the image collection device of the terminal where the first client is located; and a first extraction module configured to extract the target object's facial features from the facial image;
(2) the recognition unit includes: a recognition module configured to recognize the target object's first emotion from the extracted facial features.
The recognition module includes:
(1) a first lookup submodule configured to find an emotion identifier matching the extracted facial features;
(2) a first determining submodule configured to take the emotion represented by the found emotion identifier as the first emotion.
Optionally, the image collection device may be a camera on the mobile terminal, and the facial features may be features of facial organs such as the eyebrows, forehead, eyes, and face.
For example, with reference to FIG. 3 and Table 2 above: the biometric feature may be the user's facial expression or voice information. As shown in FIG. 3, the user's facial image is collected by the collection apparatus and analyzed, and facial features such as the eyebrows, eyes, and mouth are extracted; the user's first emotion is obtained from the characteristics of each facial feature.
It should be noted that using a camera as the image collection apparatus is only an optional example and does not limit the present application.
Optionally, after the facial image is obtained through the camera, the face image is cropped from it according to a face detection algorithm; the proportions of the cropped face image differ depending on the facial-feature extraction and expression classification method used. If the facial image is a dynamic picture, the facial features need to be tracked. The cropped face image undergoes geometric or grayscale processing, after which facial features are extracted and the expression is recognized.
Through this embodiment, facial features are extracted from the facial image of the target object and the first emotion is obtained from them, so the target object's first emotion can be obtained directly from the facial features, reducing the complexity of information interaction.
As an optional implementation,
(1) the extraction unit includes: a second collection module configured to collect a sound signal of the target object through the sound collection device of the terminal where the first client is located; and a second extraction module configured to extract the target object's sound features from the sound signal;
(2) the recognition unit includes: a second recognition module configured to recognize the target object's first emotion from the extracted sound features.
The second recognition module includes:
(1) a third acquisition submodule configured to acquire a pre-configured target audio feature, where the target audio feature is used to trigger the first interaction information;
(2) a fourth acquisition submodule configured to acquire, when the similarity between the sound features and the target audio feature is above a predetermined threshold, the emotion identifier corresponding to the target audio feature;
(3) a second determining submodule configured to take the emotion represented by the emotion identifier as the first emotion.
For example, with reference to Table 1 and FIG. 4: the sound signal may be a sound made by the user. As shown in FIG. 4, after the user speaks, the sound collection apparatus collects the user's voice and compares the collected sound with the target audio features to obtain the corresponding emotion identifier and hence the first emotion. As shown in Table 1, after a sound signal of the target object such as "Brothers, charge!" is received, the sound feature "charge" in the received signal is compared with the target audio feature, and the similarity between them is found to be 80%. Since the similarity exceeds the predetermined threshold of 60%, the corresponding emotion identifier is obtained from the target audio feature; the identifier is "excited", indicating that the user is currently very excited.
It should be noted that the content of Table 1 above is provided only for explanation and does not limit the present application. The target audio feature may be any sound signal collected by the sound collection device, the sound features may be collected by any algorithm, the target audio feature may be obtained by a method configured in advance, and the emotion identifier may be other words.
It should be noted that the target audio feature may also be a feature such as the timbre, pitch, or intensity of the sound. After the user's voice information is obtained, it is compared with the timbre, pitch, and intensity of the target audio feature to obtain the corresponding emotion identifier. Optionally, when the received sound signal is analyzed, the input speech is recognized through at least two speech recognition branches. Only when the two speech recognition results recognized by the two branches are consistent can the recognized result be output; when the two results are inconsistent, the user is prompted to re-enter the speech signal.
Optionally, when the speech recognition results recognized by at least two branches are inconsistent, the results may also be processed according to a majority-rule principle, a weighting algorithm, or a combination of the two to obtain one speech recognition result, which is then output.
Optionally, the speech recognition branches may be implemented using a statistics-based hidden Markov model recognition or training algorithm, or a combination of both.
Through this embodiment, the target audio feature is pre-configured; when the similarity between the target audio feature and the sound feature is above the predetermined threshold, the emotion identifier corresponding to the target audio feature is acquired and the emotion identified by it is taken as the first emotion, so the corresponding first emotion can be obtained from voice information, reducing the complexity of information interaction.
As an optional implementation, the determining unit includes:
(1) an acquisition module configured to acquire the emotion identifier of the first emotion;
(2) a lookup module configured to find the first interaction information matching the emotion identifier of the first emotion.
Optionally, a correspondence between the emotion identifier of the first emotion and the first interaction information may be preset; the corresponding first interaction information is looked up from the preset correspondence according to the acquired emotion identifier, so that the first interaction information is obtained and sent.
Through this embodiment, after the emotion identifier is acquired, the first interaction information is looked up according to the correspondence between the emotion identifier and the first interaction information, so the first interaction information can be sent, improving the efficiency of information interaction.
As an optional implementation, the lookup module includes:
(1) a fifth acquisition submodule configured to acquire, when the emotion identifier indicates a first emotion type, the first interaction information matching the first emotion type, where the first interaction information matching the first emotion type is used to request help for the first virtual object;
(2) a sixth acquisition submodule configured to acquire, when the emotion identifier indicates a second emotion type, the first interaction information matching the second emotion type, where the first interaction information matching the second emotion type is used to give an encouraging prompt to the second virtual object;
(3) a seventh acquisition submodule configured to acquire, when the emotion identifier indicates a third emotion type, the first interaction information matching the third emotion type, where the first interaction information matching the third emotion type is used to issue an inquiry request to the second virtual object.
For example, continuing with the above game: when the acquired emotion identifiers are of different types, the first interaction information carries different content. The first emotion type may be tension, excitement, doubt, and the like. The first interaction information may be text information such as "Save me", "Come on, we can do it!", or "Are you sure?". When the emotion identifier indicates a first emotion type such as tension, the matching first interaction information may be "Save me"; when it indicates an emotion type such as excitement, the matching first interaction information may be "Come on, we can do it!"; when it indicates an emotion type such as doubt, the matching first interaction information may be "Are you sure?", used to express a question.
Through this embodiment, the content of the first interaction information is decided according to the type of the emotion identifier, which further reduces the complexity of information interaction and improves its flexibility.
As an optional implementation, the determining unit includes at least one of the following:
(1) a third determining module configured to determine text information matching the first emotion;
(2) a fourth determining module configured to determine image information matching the first emotion;
(3) a fifth determining module configured to determine audio information matching the first emotion.
For example, continuing with the above game and with reference to FIGS. 12-13: FIG. 12 and FIG. 13 show a second client. The message sent by the first client to the second client may be a voice message or an image message; FIG. 12 shows a voice message, and FIG. 13 an image message.
It should be noted that FIGS. 12 and 13 are only examples and do not limit the present application.
Through this embodiment, different types are provided for the first interaction information, which improves the flexibility of information interaction and further reduces its complexity.
An embodiment of the present application further provides a storage medium storing a computer program, where the computer program is configured to perform, when run, the steps in any one of the above method embodiments.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: Extract a biometric feature of a target object, where the target object controls a first virtual object through a first client to execute a virtual task;
S2: Identify the target object's current first emotion from the extracted biometric feature;
S3: Determine first interaction information to be exchanged that matches the first emotion;
S4: Send the first interaction information to a second client where a second virtual object is located, where the second virtual object executes the virtual task together with the first virtual object.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: Determine the second virtual object from the virtual task, where the second virtual object and the first virtual object are virtual objects of the same camp;
S2: Send the first interaction information to the second client where the second virtual object is located.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: Acquire all virtual objects in the same camp as second virtual objects; or
S2: Acquire some virtual objects in the same camp as second virtual objects, where those virtual objects have an association relationship with the first virtual object.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: Determine a third virtual object from the virtual task, where the third virtual object and the first virtual object are virtual objects of different camps;
S2: Send second interaction information to the third client where the third virtual object is located, where the second interaction information matches a second emotion, and the second emotion and the first emotion are different emotions.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: Extracting the biometric feature of the target object includes: collecting a facial image of the target object through the image collection device of the terminal where the first client is located, and extracting the target object's facial features from the facial image;
S2: Identify the target object's first emotion from the extracted facial features.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: Find an emotion identifier matching the extracted facial features;
S2: Take the emotion represented by the found emotion identifier as the first emotion.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: Collect a sound signal of the target object through the sound collection device of the terminal where the first client is located, and extract the target object's sound features from the sound signal;
S2: Identify the target object's first emotion from the extracted sound features.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: Acquire a pre-configured target audio feature, where the target audio feature is used to trigger the first interaction information;
S2: When the similarity between the sound features and the target audio feature is above a predetermined threshold, acquire the emotion identifier corresponding to the target audio feature;
S3: Take the emotion represented by the emotion identifier as the first emotion.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: Acquire the emotion identifier of the first emotion;
S2: Find the first interaction information matching the emotion identifier of the first emotion.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: When the emotion identifier indicates a first emotion type, acquire first interaction information matching the first emotion type, where the first interaction information matching the first emotion type is used to request help for the first virtual object;
S2: When the emotion identifier indicates a second emotion type, acquire first interaction information matching the second emotion type, where the first interaction information matching the second emotion type is used to give an encouraging prompt to the second virtual object;
S3: When the emotion identifier indicates a third emotion type, acquire first interaction information matching the third emotion type, where the first interaction information matching the third emotion type is used to issue an inquiry request to the second virtual object.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
S1: Determine text information matching the first emotion;
S2: Determine image information matching the first emotion;
S3: Determine audio information matching the first emotion.
Optionally, the storage medium is further configured to store a computer program for performing the steps included in the methods of the above embodiments, which are not described again in this embodiment.
Optionally, in this embodiment, a person of ordinary skill in the art can understand that all or some of the steps of the methods of the above embodiments may be completed by a program instructing hardware related to a terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
According to yet another aspect of the embodiments of the present application, an electronic apparatus for implementing the above information interaction method is further provided. As shown in FIG. 15, the electronic apparatus includes a processor 1502, a memory 1504, a transmission apparatus 1506, and a display 1508. A computer program is stored in the memory 1504, and the processor is configured to perform the steps in any one of the above method embodiments by running the computer program. The transmission apparatus 1506 is configured to transmit the collected facial images, voice information, and the like, and the display 1508 is configured to display the first interaction information and the like.
Optionally, in this embodiment, the electronic apparatus may be located in at least one of multiple network devices of a computer network.
Optionally, in this embodiment, the processor 1502 may be configured to perform the following steps through the computer program:
S1: Extract a biometric feature of a target object, where the target object controls a first virtual object through a first client to execute a virtual task;
S2: Identify the target object's current first emotion from the extracted biometric feature;
S3: Determine first interaction information to be exchanged that matches the first emotion;
S4: Send the first interaction information to a second client where a second virtual object is located, where the second virtual object executes the virtual task together with the first virtual object.
Optionally, a person of ordinary skill in the art can understand that the structure shown in FIG. 15 is only schematic; the electronic apparatus may also be a terminal device such as a smart phone (for example an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. FIG. 15 does not limit the structure of the above electronic apparatus. For example, the electronic apparatus may include more or fewer components (such as a network interface) than shown in FIG. 15, or have a configuration different from that shown in FIG. 15.
The memory 1504 can be used to store software programs and modules, such as the program instructions/modules corresponding to the information interaction method in the embodiments of the present application; the processor 1502 performs various functional applications and data processing by running the software programs and modules stored in the memory 1504, that is, implements the above information interaction method. The memory 1504 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1504 may further include memory remotely located relative to the processor 1502, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission apparatus 1506 is configured to receive or send data via a network. Specific examples of the network may include wired and wireless networks. In one example, the transmission apparatus 1506 includes a network interface controller (NIC), which can be connected to other network devices and routers through a network cable so as to communicate with the Internet or a local area network. In one example, the transmission apparatus 1506 is a radio frequency (RF) module, which is configured to communicate with the Internet wirelessly.
The memory 1504 is configured to store information such as the first interaction information and the extracted biometric features.
The serial numbers of the above embodiments of the present application are only for description and do not represent the superiority or inferiority of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions to cause one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
In the above embodiments of the present application, the description of each embodiment has its own focus; for a part not detailed in one embodiment, reference may be made to the related description of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division of units is only a division by logical function, and in actual implementation there may be other division manners, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above are only optional implementations of the present application. It should be pointed out that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present application, and these improvements and refinements shall also fall within the protection scope of the present application.
Industrial Applicability
In the embodiments of the present application, the biometric feature of the target object is extracted; the target object's current first emotion is identified from the extracted biometric feature; the first interaction information to be exchanged, matching the first emotion, is determined; and the first interaction information is sent to the second client where the second virtual object is located. The first interaction message to be exchanged can therefore be obtained from the target object's biometric feature and sent to the second client, avoiding the problem that the application task executed by the object controlled by the application client must be interrupted to complete information interaction with the target object; information can thus be exchanged while the controlled object executes the application task, achieving the technical effect of reducing the complexity of the interaction operation.

Claims (24)

  1. An information interaction method, comprising:
    extracting, by a terminal, a biometric feature of a target object, wherein the target object controls a first virtual object through a first client to execute a virtual task;
    identifying, by the terminal, a current first emotion of the target object according to the extracted biometric feature;
    determining, by the terminal, first interaction information to be exchanged that matches the first emotion; and
    sending, by the terminal, the first interaction information to a second client where a second virtual object is located, wherein the second virtual object executes the virtual task together with the first virtual object.
  2. The method according to claim 1, wherein the sending, by the terminal, the first interaction information to the second client where the second virtual object is located comprises:
    determining, by the terminal, the second virtual object from the virtual task, wherein the second virtual object and the first virtual object are virtual objects of a same camp; and
    sending, by the terminal, the first interaction information to the second client where the second virtual object is located.
  3. The method according to claim 2, wherein the determining, by the terminal, the second virtual object from the virtual task comprises:
    acquiring, by the terminal, all virtual objects of the same camp as the second virtual object; or
    acquiring, by the terminal, some virtual objects of the same camp as the second virtual object, wherein the some virtual objects have an association relationship with the first virtual object.
  4. The method according to claim 1, wherein when the terminal sends the first interaction information to the second client where the second virtual object is located, the method further comprises:
    determining, by the terminal, a third virtual object from the virtual task, wherein the third virtual object and the first virtual object are virtual objects of different camps; and
    sending, by the terminal, second interaction information to a third client where the third virtual object is located, wherein the second interaction information matches a second emotion, and the second emotion and the first emotion are different emotions.
  5. The method according to claim 1, wherein
    the extracting, by a terminal, a biometric feature of a target object comprises: collecting, by the terminal, a facial image of the target object through an image collection device in the terminal where the first client is located, and extracting facial features of the target object from the facial image; and
    the identifying, by the terminal, a current first emotion of the target object according to the extracted biometric feature comprises: identifying, by the terminal, the first emotion of the target object according to the extracted facial features.
  6. The method according to claim 5, wherein the identifying, by the terminal, the first emotion of the target object according to the extracted facial features comprises:
    looking up, by the terminal, an emotion identifier matching the extracted facial features; and
    taking, by the terminal, the emotion represented by the found emotion identifier as the first emotion.
  7. The method according to claim 1, wherein
    the extracting, by a terminal, a biometric feature of a target object comprises: collecting, by the terminal, a sound signal of the target object through a sound collection device in the terminal where the first client is located, and extracting sound features of the target object from the sound signal; and
    the identifying, by the terminal, a current first emotion of the target object according to the extracted biometric feature comprises: identifying, by the terminal, the first emotion of the target object according to the extracted sound features.
  8. The method according to claim 7, wherein the identifying, by the terminal, the first emotion of the target object according to the extracted sound features comprises:
    acquiring, by the terminal, a pre-configured target audio feature, wherein the target audio feature is used to trigger the first interaction information;
    acquiring, by the terminal, when the similarity between the sound features and the target audio feature is above a predetermined threshold, an emotion identifier corresponding to the target audio feature; and
    taking, by the terminal, the emotion represented by the emotion identifier as the first emotion.
  9. The method according to claim 1, wherein the determining, by the terminal, first interaction information to be exchanged that matches the current first emotion of the target object comprises:
    acquiring, by the terminal, an emotion identifier of the first emotion; and
    looking up, by the terminal, the first interaction information matching the emotion identifier of the first emotion.
  10. The method according to claim 9, wherein the looking up, by the terminal, the first interaction information matching the emotion identifier of the first emotion comprises:
    when the emotion identifier indicates a first emotion type, acquiring, by the terminal, the first interaction information matching the first emotion type, wherein the first interaction information matching the first emotion type is used to request help for the first virtual object;
    when the emotion identifier indicates a second emotion type, acquiring, by the terminal, the first interaction information matching the second emotion type, wherein the first interaction information matching the second emotion type is used to give an encouraging prompt to the second virtual object; and
    when the emotion identifier indicates a third emotion type, acquiring, by the terminal, the first interaction information matching the third emotion type, wherein the first interaction information matching the third emotion type is used to issue an inquiry request to the second virtual object.
  11. The method according to any one of claims 1 to 10, wherein the determining, by the terminal, first interaction information to be exchanged that matches the current first emotion of the target object comprises at least one of the following:
    determining, by the terminal, text information matching the first emotion;
    determining, by the terminal, image information matching the first emotion; and
    determining, by the terminal, audio information matching the first emotion.
  12. An information interaction apparatus, applied to a terminal, comprising:
    an extraction unit configured to extract a biometric feature of a target object, wherein the target object controls a first virtual object through a first client to execute a virtual task;
    a recognition unit configured to identify a current first emotion of the target object according to the extracted biometric feature;
    a determining unit configured to determine first interaction information to be exchanged that matches the first emotion; and
    a sending unit configured to send the first interaction information to a second client where a second virtual object is located, wherein the second virtual object executes the virtual task together with the first virtual object.
  13. The apparatus according to claim 12, wherein the sending unit comprises:
    a first determining module configured to determine the second virtual object from the virtual task, wherein the second virtual object and the first virtual object are virtual objects of a same camp; and
    a first sending module configured to send the first interaction information to the second client where the second virtual object is located.
  14. The apparatus according to claim 13, wherein the first determining module comprises:
    a first acquisition submodule configured to acquire all virtual objects of the same camp as the second virtual object; or
    a second acquisition submodule configured to acquire some virtual objects of the same camp as the second virtual object, wherein the some virtual objects have an association relationship with the first virtual object.
  15. The apparatus according to claim 12, wherein the sending unit further comprises:
    a second determining module configured to determine a third virtual object from the virtual task, wherein the third virtual object and the first virtual object are virtual objects of different camps; and
    a second sending module configured to send second interaction information to a third client where the third virtual object is located, wherein the second interaction information matches a second emotion, and the second emotion and the first emotion are different emotions.
  16. The apparatus according to claim 12, wherein
    the extraction unit comprises: a first collection module configured to collect a facial image of the target object through an image collection device in the terminal where the first client is located; and a first extraction module configured to extract facial features of the target object from the facial image; and
    the recognition unit comprises: a recognition module configured to recognize the first emotion of the target object according to the extracted facial features.
  17. The apparatus according to claim 16, wherein the recognition module comprises:
    a first lookup submodule configured to find an emotion identifier matching the extracted facial features; and
    a first determining submodule configured to take the emotion represented by the found emotion identifier as the first emotion.
  18. The apparatus according to claim 12, wherein
    the extraction unit comprises: a second collection module configured to collect a sound signal of the target object through a sound collection device in the terminal where the first client is located; and a second extraction module configured to extract sound features of the target object from the sound signal; and
    the recognition unit comprises: a second recognition module configured to recognize the first emotion of the target object according to the extracted sound features.
  19. The apparatus according to claim 18, wherein the second recognition module comprises:
    a third acquisition submodule configured to acquire a pre-configured target audio feature, wherein the target audio feature is used to trigger the first interaction information;
    a fourth acquisition submodule configured to acquire, when the similarity between the sound features and the target audio feature is above a predetermined threshold, an emotion identifier corresponding to the target audio feature; and
    a second determining submodule configured to take the emotion represented by the emotion identifier as the first emotion.
  20. The apparatus according to claim 12, wherein the determining unit comprises:
    an acquisition module configured to acquire an emotion identifier of the first emotion; and
    a lookup module configured to find the first interaction information matching the emotion identifier of the first emotion.
  21. The apparatus according to claim 20, wherein the lookup module comprises:
    a fifth acquisition submodule configured to acquire, when the emotion identifier indicates a first emotion type, the first interaction information matching the first emotion type, wherein the first interaction information matching the first emotion type is used to request help for the first virtual object;
    a sixth acquisition submodule configured to acquire, when the emotion identifier indicates a second emotion type, the first interaction information matching the second emotion type, wherein the first interaction information matching the second emotion type is used to give an encouraging prompt to the second virtual object; and
    a seventh acquisition submodule configured to acquire, when the emotion identifier indicates a third emotion type, the first interaction information matching the third emotion type, wherein the first interaction information matching the third emotion type is used to issue an inquiry request to the second virtual object.
  22. The apparatus according to any one of claims 12 to 21, wherein the determining unit comprises at least one of the following:
    a third determining module configured to determine text information matching the first emotion;
    a fourth determining module configured to determine image information matching the first emotion; and
    a fifth determining module configured to determine audio information matching the first emotion.
  23. A storage medium storing a computer program, wherein the computer program is configured to perform, when run, the method according to any one of claims 1 to 11.
  24. An electronic apparatus, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to perform, through the computer program, the method according to any one of claims 1 to 11.
PCT/CN2018/119356 2018-02-11 2018-12-05 信息交互方法和装置、存储介质及电子装置 WO2019153860A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18905825.8A EP3751395A4 (en) 2018-02-11 2018-12-05 INFORMATION EXCHANGE PROCEDURE, DEVICE, STORAGE MEDIUM AND ELECTRONIC DEVICE
US16/884,877 US11353950B2 (en) 2018-02-11 2020-05-27 Information interaction method and device, storage medium and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810142618.X 2018-02-11
CN201810142618.XA CN108681390B (zh) 2018-02-11 2018-02-11 信息交互方法和装置、存储介质及电子装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/884,877 Continuation US11353950B2 (en) 2018-02-11 2020-05-27 Information interaction method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
WO2019153860A1 true WO2019153860A1 (zh) 2019-08-15

Family

ID=63800237

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/119356 WO2019153860A1 (zh) 2018-02-11 2018-12-05 信息交互方法和装置、存储介质及电子装置

Country Status (4)

Country Link
US (1) US11353950B2 (zh)
EP (1) EP3751395A4 (zh)
CN (1) CN108681390B (zh)
WO (1) WO2019153860A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085420A (zh) * 2020-09-29 2020-12-15 中国银行股份有限公司 一种情绪级别确定方法、装置和设备

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108681390B (zh) * 2018-02-11 2021-03-26 腾讯科技(深圳)有限公司 信息交互方法和装置、存储介质及电子装置
CN111176430B (zh) * 2018-11-13 2023-10-13 奇酷互联网络科技(深圳)有限公司 一种智能终端的交互方法、智能终端及存储介质
CN111355644B (zh) * 2020-02-19 2021-08-20 珠海格力电器股份有限公司 一种在不同空间之间进行信息交互的方法及系统
CN111401198B (zh) * 2020-03-10 2024-04-23 广东九联科技股份有限公司 观众情绪识别方法、装置及系统
CN111783728A (zh) * 2020-07-15 2020-10-16 网易(杭州)网络有限公司 信息交互方法、装置和终端设备
CN111803936B (zh) * 2020-07-16 2024-05-31 网易(杭州)网络有限公司 一种语音通信方法及装置、电子设备、存储介质
CN113050859B (zh) * 2021-04-19 2023-10-24 北京市商汤科技开发有限公司 交互对象的驱动方法、装置、设备以及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104035558A (zh) * 2014-05-30 2014-09-10 小米科技有限责任公司 终端设备控制方法及装置
US20170177295A1 (en) * 2004-11-24 2017-06-22 Apple Inc. Music synchronization arrangement
CN108681390A (zh) * 2018-02-11 2018-10-19 腾讯科技(深圳)有限公司 信息交互方法和装置、存储介质及电子装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5349860B2 (ja) * 2008-08-07 2013-11-20 株式会社バンダイナムコゲームス プログラム、情報記憶媒体及びゲーム装置
CN103207662A (zh) * 2012-01-11 2013-07-17 联想(北京)有限公司 一种获得生理特征信息的方法及装置
CN103258556B (zh) * 2012-02-20 2016-10-05 联想(北京)有限公司 一种信息处理方法及装置
CN104866101B (zh) * 2015-05-27 2018-04-27 世优(北京)科技有限公司 虚拟对象的实时互动控制方法及装置
JP6263252B1 (ja) * 2016-12-06 2018-01-17 株式会社コロプラ 情報処理方法、装置、および当該情報処理方法をコンピュータに実行させるためのプログラム

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170177295A1 (en) * 2004-11-24 2017-06-22 Apple Inc. Music synchronization arrangement
CN104035558A (zh) * 2014-05-30 2014-09-10 小米科技有限责任公司 终端设备控制方法及装置
CN108681390A (zh) * 2018-02-11 2018-10-19 腾讯科技(深圳)有限公司 信息交互方法和装置、存储介质及电子装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3751395A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085420A (zh) * 2020-09-29 2020-12-15 中国银行股份有限公司 一种情绪级别确定方法、装置和设备

Also Published As

Publication number Publication date
CN108681390A (zh) 2018-10-19
US20200285306A1 (en) 2020-09-10
US11353950B2 (en) 2022-06-07
CN108681390B (zh) 2021-03-26
EP3751395A1 (en) 2020-12-16
EP3751395A4 (en) 2021-11-17

Similar Documents

Publication Publication Date Title
WO2019153860A1 (zh) 信息交互方法和装置、存储介质及电子装置
CN108234591B (zh) 基于身份验证装置的内容数据推荐方法、装置和存储介质
US9779527B2 (en) Method, terminal device and storage medium for processing image
KR102387495B1 (ko) 이미지 처리 방법 및 장치, 전자 기기 및 기억 매체
CN107632706B (zh) 多模态虚拟人的应用数据处理方法和系统
CN108108649B (zh) 身份验证方法及装置
WO2018076622A1 (zh) 图像处理方法、装置及终端
CN109086276B (zh) 数据翻译方法、装置、终端及存储介质
CN111240482B (zh) 一种特效展示方法及装置
WO2015024226A1 (zh) 通信方法、客户端和终端
WO2017217314A1 (ja) 応対装置、応対システム、応対方法、及び記録媒体
CN110555171A (zh) 一种信息处理方法、装置、存储介质及系统
CN113703585A (zh) 交互方法、装置、电子设备及存储介质
CN110910874A (zh) 一种互动课堂语音控制方法、终端设备、服务器和系统
CN112274909A (zh) 应用运行控制方法和装置、电子设备及存储介质
CN111080747B (zh) 一种人脸图像处理方法及电子设备
CN114567693B (zh) 视频生成方法、装置和电子设备
EP3200092A1 (en) Method and terminal for implementing image sequencing
CN109166164B (zh) 一种表情图片的生成方法及终端
CN110443238A (zh) 一种显示界面场景识别方法、终端及计算机可读存储介质
CN111666498B (zh) 一种基于互动信息的好友推荐方法、相关装置及存储介质
CN109510897B (zh) 一种表情图片管理方法及移动终端
CN108958690B (zh) 多屏互动方法、装置、终端设备、服务器及存储介质
CN108255389B (zh) 图像编辑方法、移动终端及计算机可读存储介质
CN116320721A (zh) 一种拍摄方法、装置、终端及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905825

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2018905825

Country of ref document: EP