CN111013135A - Interaction method, device, medium and electronic equipment - Google Patents

Interaction method, device, medium and electronic equipment

Info

Publication number
CN111013135A
Authority
CN
China
Prior art keywords
interaction
virtual
role
area
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911101967.8A
Other languages
Chinese (zh)
Inventor
李云飞
张前川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201911101967.8A priority Critical patent/CN111013135A/en
Publication of CN111013135A publication Critical patent/CN111013135A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games

Abstract

The invention provides an interaction method, an interaction device, a medium, and an electronic device. The interaction method comprises the following steps: displaying an interactive interface for interaction, the interactive interface comprising a first area and a second area, wherein the first area is used for displaying a first virtual environment interface corresponding to a first user account together with a real avatar interface of a first character, and the second area is used for displaying a second virtual environment interface having an interactive relationship with the first user account, the second virtual environment interface comprising a plurality of virtual second characters; receiving a trigger signal for starting the interaction, and triggering the start of the interaction according to the trigger signal; and judging whether the current state satisfies a preset condition for ending the interaction, and ending the interaction when it does. With this method and system, both interacting parties can watch the whole interaction process intuitively and in real time, which both increases the interest of the interaction process and improves the accuracy of controlling the virtual characters.

Description

Interaction method, device, medium and electronic equipment
Technical Field
The present invention relates to the field of computer technology, and in particular to an interaction method, an interaction apparatus, a medium, and an electronic device.
Background
With the development of mobile terminal technology, smartphones and tablet computers are used ever more widely, and existing mobile terminals carry a variety of touch-screen-based applet (mini-program) applications, such as multiplayer online competitive applets and shooting applets.
A multiplayer online competitive applet in particular requires several touch keys on the touch screen to be operated simultaneously or in sequence in order to control a virtual object in a virtual scene to shoot, squat, launch ammunition, walk, or jump. In such interactions the user usually needs both hands to cooperate, and most control is performed by gesture; when the mobile terminal the user holds has a small screen, as a mobile phone does, performing these complicated operations is inconvenient. In addition, the existing interaction modes are monotonous and the characters cannot cooperate with one another, which reduces the pleasure of interaction.
Disclosure of Invention
The present invention is directed to an interaction method, an apparatus, a medium, and an electronic device that can solve at least one of the technical problems mentioned above. The specific scheme is as follows:
according to a specific implementation manner of the present invention, in a first aspect, the present invention provides an interaction method, including:
displaying an interactive interface for interaction, wherein the interactive interface comprises a first area and a second area, and the first area is used for displaying a first virtual environment interface corresponding to a first character and a real avatar or model avatar interface of the first character; the second area is used for displaying a second virtual environment interface which has an interactive relation with the first role, the second virtual environment interface comprises a plurality of virtual second roles, and the virtual second roles are virtual roles which automatically receive computer control instructions for interaction or virtual roles which receive other user control instructions for interaction in real time; the plurality of virtual second characters have a plurality of character attributes, the plurality of character attributes of the plurality of virtual second characters being different from each other;
receiving a trigger signal for starting interaction of the first role and/or the second role, and triggering the start of interaction according to the trigger signal;
and acquiring first display information of the first role and second display information of the second role, and finishing the interaction when the first display information and/or the second display information meet a preset condition for finishing the interaction.
According to a second aspect, the present invention provides an interaction apparatus, including:
the display unit is used for displaying an interactive interface for interaction, wherein the interactive interface comprises a first area and a second area, and the first area is used for displaying a first virtual environment interface corresponding to a first role and a real head portrait or model head portrait interface of the first role; the second area is used for displaying a second virtual environment interface which has an interactive relation with the first role, the second virtual environment interface comprises a plurality of virtual second roles, and the virtual second roles are virtual roles which automatically receive computer control instructions for interaction or virtual roles which receive other user control instructions for interaction in real time; the plurality of virtual second characters have a plurality of character attributes, the plurality of character attributes of the plurality of virtual second characters being different from each other;
the receiving unit is used for receiving a trigger signal for starting interaction of the first role and/or the second role and starting interaction according to the trigger signal;
and the control unit is used for acquiring the first display information of the first role and the second display information of the second role, and finishing the interaction when the first display information and/or the second display information meet the preset condition for finishing the interaction.
According to a third aspect, the invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the interaction method according to any one of the preceding claims.
According to a fourth aspect of the present invention, there is provided an electronic apparatus including: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement an interaction method as claimed in any preceding claim.
Compared with the prior art, the scheme of the embodiments of the present invention has at least the following beneficial effects: in the interaction method, apparatus, medium, and electronic device provided by the present invention, a first virtual environment interface and a real avatar interface of a first user are arranged in a first area of the client display screen, and a second virtual environment interface together with a plurality of second characters is arranged in a second area of the client display screen. User expressions captured by the user's camera can thus drive the interaction, both interacting parties can watch the whole interaction process intuitively and in real time, and the characters in the second area can cooperate according to their respective characteristics, which increases the interest of the interaction process and also improves the accuracy of controlling the virtual characters.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 illustrates an application scenario diagram of an interaction method according to an embodiment of the present invention;
FIG. 2 shows a flow diagram of an interaction method according to an embodiment of the invention;
FIG. 3 is a diagram illustrating interaction between two interacting parties according to an embodiment of the present invention;
FIG. 4 shows a schematic diagram of an interaction means according to an embodiment of the invention;
fig. 5 shows a schematic diagram of an electronic device connection structure according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that the term "and/or" used herein merely describes an association between associated objects, and indicates that three relationships may exist. For example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present invention to describe various elements, these elements should not be limited by these terms; the terms are used only to distinguish one element from another. For example, a first element could also be termed a second element and, similarly, a second element could be termed a first element, without departing from the scope of the embodiments of the present invention.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should also be noted that the terms "comprises," "comprising," and any variations thereof are intended to cover a non-exclusive inclusion, so that an article or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such an article or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the article or apparatus that comprises it.
Alternative embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Example 1
As shown in fig. 1, in an application scenario according to an embodiment of the present invention, the current user operates, through a terminal device such as a mobile phone, a client installed on the terminal device. The clients include a first client and a plurality of second clients; a first user account is logged in on the first client, a plurality of second user accounts are logged in on the plurality of second clients, and both the first client and the second clients communicate with a background server over a network. One specific application scenario is interaction between the first user account of the first client and the second user accounts of the multiple second clients, but this is not the only applicable scenario; any scenario to which this embodiment can be applied is included.
As shown in fig. 2, according to a specific embodiment of the present invention, an interaction method is provided that is applied in a client on which a first user account is logged in, and the method includes the following steps. The image examples given below are not intended to limit the interaction process of the present application; they are provided only to aid understanding of the scheme.
Step S202: displaying an interactive interface for interaction, wherein the interactive interface comprises a first area and a second area; the first area is used for displaying a first virtual environment interface corresponding to the first user account and a real avatar or model avatar interface of the first character; the second area is used for displaying a second virtual environment interface having an interactive relationship with the first user account, and the second virtual environment interface includes a plurality of virtual second characters. The several little-person characters can be arranged in a certain formation, for example in one or two horizontal rows, and each character can move freely; for convenience of control, movement in the second area may be restricted to forward, backward, left, and right.
As shown in fig. 3, the interactive interface is a visual interface, displayed after entry through the mobile terminal, in which both interacting parties are shown. For example, after logging in to the applet client, the user enters the interaction scene presented by the applet.
The interactive interface comprises a first area and a second area, which can be arranged one above the other or side by side; the interface can also be switched to landscape mode using the terminal's gravity sensor. In a preferred implementation, the first area is arranged above the second area, so that both interacting parties can watch the whole interaction process intuitively and in real time, and the first user can launch different types of weaponry in preset directions simply by moving the facial features. This achieves accurate control of the corresponding virtual character to execute different target actions, frees both hands, and increases the interest of the interaction process.
The first area comprises a first virtual environment interface and a real avatar interface. The first virtual environment interface is used for displaying the first character corresponding to the first user account, and first virtual information of the first character is displayed on it; the first virtual information includes first display information, a first experience value, a first identification value, or attack information. To aid understanding of this embodiment, taking an avatar as an example: the first display information may be information representing the first character's life value, such as a health-bar progress (which may be displayed as a numerical value on a progress bar, e.g. 10000); the first experience value is a virtual message, identified numerically or in another form, awarded to the user for, e.g., time spent participating in the applet, such as an experience value of 500; the first identification value may be virtual information earned through the number of games the user has won, such as a reward represented by gold or small coins, which can be spent on weapons, equipment, skills, life value, and the like. The attack information represents the various attack marks with injury attributes that the user can employ during the interaction; different types of attack marks can be defined, configured, and rendered on the interactive interface according to the requirements of different programs. For confrontation-type applets, alternative attack marks include, but are not limited to, laser beams, bullets, shells, fireballs, and so on, and each model has its own characteristics, such as an injury characteristic value, a velocity characteristic value, and a distance characteristic value, each represented as data. For example, a fireball may have an injury characteristic value of 10000, a velocity characteristic value of 100, and a distance characteristic value of 20; a shell may have an injury characteristic value of 8000, a velocity characteristic value of 200, and a distance characteristic value of 30; a bullet may have an injury characteristic value of 1000, a velocity characteristic value of 300, and a distance characteristic value of 40; and so on. Different types of attack marks need to be rendered so that the confrontation effect is better; different weapons can be rendered with different effects according to their realistic appearance characteristics. Rendering is a known method and is not described further here. Specific effects may be described as, for example: opening the mouth releases a fireball, pouting the mouth releases a shell, curling the mouth releases a bullet, and so on. The above description of attacking weapons is not exhaustive. The character in the first area can interact selectively, for example by attacking the characters in the second area with lower life values first, so as to reduce the number of second characters quickly.
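To make these characteristic values concrete, here is a minimal sketch that models an attack mark as a small data record, populated with the fireball, shell, and bullet values from the example above. The class and field names are illustrative assumptions, not terminology from the patent.

```python
from dataclasses import dataclass

@dataclass
class AttackMark:
    """One attack mark with injury attributes, as described above."""
    name: str
    injury: int    # injury characteristic value
    velocity: int  # velocity characteristic value
    distance: int  # distance characteristic value

# Values taken from the examples in this paragraph.
FIREBALL = AttackMark("fireball", injury=10000, velocity=100, distance=20)
SHELL = AttackMark("shell", injury=8000, velocity=200, distance=30)
BULLET = AttackMark("bullet", injury=1000, velocity=300, distance=40)
```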
The second area is used for displaying a second virtual environment interface having an interactive relationship with the first user account, and is used for displaying virtual information of a plurality of second characters (A, B, C).
The plurality of virtual second characters are either virtual characters that interact by automatically receiving computer control instructions, or virtual characters that interact by receiving control instructions from other users in real time.
That is, the plurality of second characters can be controlled automatically by a computer program: their actions, skills, and mutual coordination are scripted by the program and are not controlled by a person during the subsequent interaction. In another embodiment, the plurality of second characters are controlled by receiving control instructions from other users in real time; for example, another user may control a little-person character by touch, selecting that character when entering the applet.
Optionally, the second characters have character attributes, including a speed attribute, a bounce attribute, an attack attribute, a protection attribute, and a life attribute, and the character attributes of the plurality of virtual second characters differ from one another. The speed attribute is the movement speed of each little-person character and is identified numerically by the background server, e.g. a speed of 100 for A, 80 for B, and 60 for C. The bounce attribute is each character's bounce value, identified numerically, e.g. a bounce value of 100 for A, 80 for B, and 60 for C. The attack attribute is each little-person character's attack value, identified numerically, e.g. an attack value of 100 for A, 80 for B, and 60 for C. The protection attribute indicates that a character has the ability to protect other characters; for example, with this attribute turned on, the damage that the other little persons would otherwise take can be reduced by 20%. The life attribute is each character's life value, identified numerically by the backend server, e.g. a life value of 100 for A, 80 for B, and 60 for C.
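As a sketch of how the background server might identify these attributes numerically, the record below carries the example values for A, B, and C given above, together with the 20% damage reduction of the protection attribute; the names and the exact damage formula are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class CharacterAttributes:
    speed: int       # movement speed value
    bounce: int      # bounce value
    attack: int      # attack value
    life: int        # life value
    protector: bool = False  # protection attribute: can shield others

# Example values from the paragraph above.
SECOND_CHARACTERS = {
    "A": CharacterAttributes(speed=100, bounce=100, attack=100, life=100),
    "B": CharacterAttributes(speed=80, bounce=80, attack=80, life=80),
    "C": CharacterAttributes(speed=60, bounce=60, attack=60, life=60),
}

def damage_to_protected(base_damage: int, protected: bool) -> int:
    """With the protection attribute turned on, damage taken by the
    other little persons is reduced by 20%, per the example above."""
    return int(base_damage * 0.8) if protected else base_damage
```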
When a user enters the interactive interface as a second character, a particular little-person character can be selected, and each little-person character has different skills: some have high health, strong attacks, and agile bounce; some have low health, weak attacks, and agile bounce; some can restore health to others; and so on. A little person's attacking weapon may be single or varied; for ease of interaction it is preferable that each little-person character has a single attacking weapon.
Displaying second virtual information of the second role on the second virtual environment interface of the second area, wherein the second virtual information includes second display information, a second experience value, a second identification value, a role attribute or attack information. The display information, the experience value, the identification value, the role attribute or the attack information are as described above, and are not described herein again.
Optionally, displaying in the first area the first virtual environment interface corresponding to the first user account and the real avatar interface of the first character includes:
displaying the expression actions of the first user account on the real avatar interface of the first character, the expression actions comprising at least one of: head actions, mouth actions, or eye actions;
wherein the head actions include at least one of: nodding, shaking the head, twisting the first character's head to the left, and twisting the first character's head to the right;
the mouth actions of the first character include at least one of: pouting the mouth, opening the mouth, and curling the mouth;
the eye actions of the first character include at least one of: blinking, raising the eyebrows, looking toward the left side of the first virtual environment interface, and looking toward the right side of the first virtual environment interface.
As shown in fig. 3, after the expression image of the user's real avatar or model avatar is acquired, the change of the expression image needs to be monitored in real time by software and/or hardware, watching whether there is any motion in the head, eye, and mouth regions.
Wherein the head region is monitored by watching whether the positions of the ears and/or the nose change; the eye region is monitored by watching whether the positions of the eyebrows, eyelids, and pupils change; and the mouth region is monitored by watching whether the positions of the upper lip, lower lip, and chin change.
Specifically, the method may include the following sub-steps:
1. The expression image is divided into a head region, an eye region, and a mouth region, as shown in fig. 3.
Optionally, the head region, the eye region, and the mouth region each contain feature regions: the feature regions of the head region are the ears and the nose; the feature regions of the mouth region are the upper lip, the lower lip, and the chin; and the feature regions of the eye region are the eyebrows, the eyelids, and the pupils.
2. At least one feature point is selected in each of the feature regions.
Feature points are points defined for analyzing the movement of each region of the face; one or more may be selected. As shown in fig. 3, for example, the feature points selected in the head region may be a feature point a1 on either or both ears and a feature point a2 on the tip of the nose; the feature points selected in the mouth region may be a feature point b1 on the upper lip, a feature point b2 on the lower lip, and a feature point b3 on the chin; the feature points selected in the eye region may be a feature point c1 on either eyebrow, a feature point c2 on either eyelid, and a feature point c3 on either pupil. The feature points may be chosen arbitrarily, but positions where the region's movement is most pronounced, such as the center of the lips or the lowest point of the chin, are preferably selected so that the motion vectors of the feature points can be recorded accurately.
3. The motion characteristics of the expression image are monitored through the position changes of the at least one feature point, as sketched below.
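A minimal sketch of sub-step 3 follows: feature-point positions are compared frame to frame, and a region counts as moving when any of its feature points is displaced beyond a threshold. The point names follow the a1/b1/c1 example above; the threshold value and data shapes are assumptions.

```python
import math

Point = tuple[float, float]  # (x, y) position of a feature point

def motion_vector(prev: Point, curr: Point) -> tuple[float, float]:
    """Displacement of one feature point between two frames."""
    return (curr[0] - prev[0], curr[1] - prev[1])

def region_moved(prev_pts: dict[str, Point],
                 curr_pts: dict[str, Point],
                 threshold: float = 5.0) -> bool:
    """A region (head, eyes, or mouth) is considered to have moved when
    any of its feature points shifts more than the threshold."""
    return any(
        math.hypot(*motion_vector(prev, curr_pts[name])) > threshold
        for name, prev in prev_pts.items()
    )

# Usage: mouth region with feature points b1 (upper lip), b2 (lower lip),
# and b3 (chin), as in the example above.
prev_frame = {"b1": (50.0, 80.0), "b2": (50.0, 90.0), "b3": (50.0, 100.0)}
curr_frame = {"b1": (50.0, 78.0), "b2": (50.0, 99.0), "b3": (50.0, 101.0)}
print(region_moved(prev_frame, curr_frame))  # True: b2 moved 9 px
```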
Step S204: and receiving a trigger signal for starting interaction, and triggering the interaction to be started according to the trigger signal.
In this step, after receiving the trigger signal for starting the interaction, the start of the interaction may be triggered according to the trigger signal.
To improve the user's experience and enjoyment, the matching condition of the first client may be configured in advance. For example, if the spell level of the current first client's first virtual character is 10 and that character possesses high-level spells, the condition for matching with the current first client can be set as: the to-be-selected virtual character of a candidate client must have a spell level of at least 10. This increases the interest of the interaction process.
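As a sketch of this pre-configured matching condition, the filter below keeps only candidate clients whose to-be-selected character reaches the required level (10 in the example); the function and parameter names are assumptions.

```python
def select_matching_clients(candidates: dict[str, int],
                            required_level: int = 10) -> list[str]:
    """Return the candidate accounts whose to-be-selected virtual
    character has a spell level of at least the configured level."""
    return [account for account, level in candidates.items()
            if level >= required_level]

# Usage with hypothetical candidate accounts and their spell levels.
print(select_matching_clients({"user_b": 12, "user_c": 8, "user_d": 10}))
# -> ['user_b', 'user_d']
```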
Optionally, receiving a trigger signal for starting the interaction and triggering the start of the interaction according to the trigger signal includes: receiving an expression action of the first user account, and controlling the first character to execute a target action according to the expression action, wherein the target action comprises at least one of: firing a fireball, shooting, firing a laser beam, firing a bullet, and firing a shell.
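A minimal dispatch sketch of this step, assuming the expression actions have already been recognized (see the feature-point sketch above). The mapping mirrors the examples given earlier in the description (opening the mouth releases a fireball, pouting releases a shell, curling the mouth releases a bullet); the string keys are illustrative assumptions.

```python
# Expression action -> target action, per the examples in the description.
EXPRESSION_TO_TARGET_ACTION = {
    "open_mouth": "fire_fireball",
    "pout_mouth": "fire_shell",
    "curl_mouth": "fire_bullet",
}

def on_expression_action(expression: str) -> str | None:
    """Control the first character: map a recognized expression action to
    the target action to execute; unmapped expressions do nothing."""
    return EXPRESSION_TO_TARGET_ACTION.get(expression)

print(on_expression_action("open_mouth"))  # -> 'fire_fireball'
```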
Step S206: and acquiring first display information of the first role and second display information of the second role, and finishing the interaction when the first display information and/or the second display information meet a preset condition for finishing the interaction.
Wherein the preset condition includes: the first display information of the first character corresponding to the first user account is zero; or the second display information of all of the plurality of virtual second characters is zero. In other words, the interaction ends when the life value of either (or both) of the interacting parties reaches zero. Since there is only one first character, the applet ends and the second characters win as soon as the first character's life value is zero; since there are several second characters, the applet ends in their defeat only when all of their life values are zero.
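The end-of-interaction check reduces to the following sketch, where the display information is taken to be the life values described earlier; the function names are assumptions.

```python
def interaction_ended(first_life: int, second_lives: list[int]) -> bool:
    """End the interaction when the first character's life value is zero
    and/or the life values of all virtual second characters are zero."""
    return first_life <= 0 or all(life <= 0 for life in second_lives)

def winner(first_life: int, second_lives: list[int]) -> str | None:
    # There is only one first character, so the second characters win as
    # soon as its life value reaches zero; the second characters lose
    # only when every one of their life values is zero.
    if first_life <= 0:
        return "second characters"
    if all(life <= 0 for life in second_lives):
        return "first character"
    return None  # interaction continues

print(winner(10000, [0, 0, 0]))  # -> 'first character'
```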
Optionally, before the interactive interface for interaction is displayed, the method further includes the following modes of entering the interactive interface:
the first embodiment,
1. The user's eye position information is received through the terminal camera and mapped onto the terminal screen.
2. Whether the eye position information is valid is judged, for example whether it lies in the upper half of the terminal screen with both eyes inside the display screen; if so, the interactive interface is entered.
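A sketch of this validity check under stated assumptions: eye positions are screen coordinates with the y axis growing downward, and the information is valid when both eyes fall in the upper half of the screen and inside the display area.

```python
Point = tuple[float, float]  # (x, y) in screen coordinates, y grows downward

def eye_position_valid(left_eye: Point, right_eye: Point,
                       screen_w: int, screen_h: int) -> bool:
    """Both eyes must lie inside the display screen and within its
    upper half before the interactive interface is entered."""
    def ok(p: Point) -> bool:
        x, y = p
        return 0 <= x <= screen_w and 0 <= y <= screen_h / 2
    return ok(left_eye) and ok(right_eye)

# Usage on a hypothetical 1080x1920 screen.
print(eye_position_valid((400, 500), (680, 505), 1080, 1920))   # True
print(eye_position_valid((400, 1200), (680, 505), 1080, 1920))  # False
```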
Second mode:
1. Sending an opening request to a server, the opening request carrying the first user account, so that the server selects, from a plurality of candidate user accounts, the user accounts that will interact with the first user account;
2. receiving, from the server, the user account information of the user accounts that will interact with the first user account;
3. receiving a start instruction and entering the interactive interface.
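The second entry mode amounts to the following request/response sketch. The transport object and message shapes are assumptions; the patent specifies only that the opening request carries the first user account and that the server returns the matched accounts followed by a start instruction.

```python
def show_interactive_interface(accounts: list[str]) -> None:
    """Placeholder for displaying the interactive interface (step S202)."""
    print("entering interactive interface with:", accounts)

def enter_via_server(first_account: str, server) -> list[str]:
    # 1. The opening request carries the first user account.
    server.send({"type": "open_request", "account": first_account})
    # 2. The server replies with the user accounts it selected to
    #    interact with the first user account.
    matched = server.receive()["matched_accounts"]
    # 3. Enter the interactive interface upon the start instruction.
    if server.receive().get("type") == "start":
        show_interactive_interface(matched)
    return matched
```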
According to this method, a first virtual environment interface and a real avatar interface of the first user are arranged in the first area of the client display screen, and a second virtual environment interface together with a plurality of second characters is arranged in the second area, so that user expressions captured by the user's camera can drive the interaction. Both interacting parties can watch the whole interaction process intuitively and in real time, which increases the interest of the interaction process and also improves the accuracy of controlling the virtual characters.
Example 2
As shown in fig. 1, the application scenario is one in which the current user operates, through a terminal device such as a mobile phone, a client installed on the terminal device. The clients include a first client and a second client; a first user account is logged in on the first client, a second user account is logged in on the second client, and both clients communicate with a background server over a network. One specific application scenario is interaction between the first user account of the first client and the second user account of the second client, but this is not the only applicable scenario; any scenario to which this embodiment can be applied is included. For method steps with the same names and meanings as those described in Embodiment 1, the explanations in Embodiment 1 apply; this embodiment achieves the same technical effects, and the description is therefore not repeated here.
As shown in fig. 4, according to an embodiment of the present invention, an interaction apparatus is provided that is applied to a client on which a first user account is logged in, and the interaction apparatus includes a display unit 402, a receiving unit 404, and a control unit 406:
a display unit 402, configured to display an interactive interface for interaction, wherein the interactive interface comprises a first area and a second area; the first area is used for displaying the first virtual environment interface corresponding to the first user account and the real avatar interface of the first character; the second area is used for displaying a second virtual environment interface having an interactive relationship with the first user account, the second virtual environment interface comprising a plurality of virtual second characters.
Optionally, the plurality of virtual second characters are either virtual characters that interact by automatically receiving computer control instructions, or virtual characters that interact by receiving control instructions from other users in real time.
Optionally, the second role has role attributes, where the role attributes include a speed attribute, a bounce attribute, an attack attribute, a protection attribute, and a life attribute, and the role attributes of the plurality of virtual second roles are different from each other.
Optionally, displaying in the first area the first virtual environment interface corresponding to the first user account and the real avatar interface of the first character includes:
displaying the expression actions of the first user account on the real avatar interface of the first character, the expression actions comprising at least one of: head actions, mouth actions, or eye actions;
wherein the head actions include at least one of: nodding, shaking the head, twisting the first character's head to the left, and twisting the first character's head to the right;
the mouth actions of the first character include at least one of: pouting the mouth, opening the mouth, and curling the mouth;
the eye actions of the first character include at least one of: blinking, raising the eyebrows, looking toward the left side of the first virtual environment interface, and looking toward the right side of the first virtual environment interface.
The receiving unit 404 is configured to receive a trigger signal for starting interaction, and trigger the starting of interaction according to the trigger signal.
Optionally, receiving a trigger signal for starting the interaction and triggering the start of the interaction according to the trigger signal includes: receiving an expression action of the first user account, and controlling the first character to execute a target action according to the expression action, wherein the target action comprises at least one of: firing a fireball, shooting, firing a laser beam, firing a bullet, and firing a shell.
The control unit 406 is configured to determine whether the current condition meets a preset condition for ending the interaction, and end the interaction when the current condition meets the preset condition.
Optionally, a reading unit (not shown) is further included, for reading the preset condition.
Wherein the preset condition includes: the first display information of the first character corresponding to the first user account is zero; or the second display information of all of the plurality of virtual second characters is zero; or both the first display information of the first character and the second display information of all of the plurality of virtual second characters are zero.
Optionally, an entry unit (not shown) is further included:
1. sending an opening request to a server, the opening request carrying the first user account, so that the server selects, from a plurality of candidate user accounts, the user accounts that will interact with the first user account;
2. receiving, from the server, the user account information of the user accounts that will interact with the first user account;
3. receiving a start instruction and entering the interactive interface.
Optionally, the plurality of second characters can communicate with each other, for example by voice or text, so as to achieve closer coordination between the characters.
With this apparatus, a first virtual environment interface and a real avatar interface of the first user are arranged in the first area of the client display screen, and a second virtual environment interface together with a plurality of second characters is arranged in the second area, so that user expressions captured by the user's camera can drive the interaction. Both interacting parties can watch the whole interaction process intuitively and in real time, which increases the interest of the interaction process and also improves the accuracy of controlling the virtual characters.
Example 3
As shown in fig. 5, the present embodiment provides an electronic device for use with the interaction method, the electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to: arrange the first virtual environment interface and the real avatar interface of the first user on a first screen portion of the client display, and arrange the second virtual environment interface and the real avatar interface of the second user on a second screen portion of the client display, so that the interface corresponding to the first user account and the interface corresponding to the second user account are displayed in split screen.
Example 4
The disclosed embodiments provide a non-volatile computer storage medium having stored thereon computer-executable instructions that can perform the interaction method of any of the above method embodiments.
Example 5
Referring now to FIG. 5, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device may include a processing means (e.g., central processing unit, graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: the first virtual environment interface and the real head portrait interface of the first user are arranged on the first screen of the display screen of the client side, and the second virtual environment interface and the real head portrait interface of the second user are arranged on the second screen of the display screen of the client side, so that the interface corresponding to the first user account and the interface corresponding to the second user account are displayed in a split screen mode.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.

Claims (10)

1. An interaction method, comprising:
displaying an interactive interface for interaction, wherein the interactive interface comprises a first area and a second area, and the first area is used for displaying a first virtual environment interface corresponding to a first character and a real avatar or model avatar interface of the first character; the second area is used for displaying a second virtual environment interface which has an interactive relation with the first role, the second virtual environment interface comprises a plurality of virtual second roles, and the virtual second roles are virtual roles which automatically receive computer control instructions for interaction or virtual roles which receive other user control instructions for interaction in real time; the plurality of virtual second characters have a plurality of character attributes, the plurality of character attributes of the plurality of virtual second characters being different from each other;
receiving a trigger signal for starting interaction of the first role and/or the second role, and triggering the start of interaction according to the trigger signal;
and acquiring first display information of the first role and second display information of the second role, and finishing the interaction when the first display information and/or the second display information meet a preset condition for finishing the interaction.
2. The method of claim 1, wherein the character attributes comprise a velocity attribute, a bounce attribute, an attack attribute, and/or a life attribute, and wherein each character attribute is characterized by a numerical value.
3. The method of claim 2, wherein the plurality of character attributes of the plurality of virtual second characters are distinct from each other, comprising:
if the speed attribute value of a second character is high, its bounce attribute value is high and its attack attribute value and/or life attribute value is low; or, if the life attribute value of a second character is high, its speed attribute value is low, its bounce attribute value is low, and/or its attack attribute value is low.
4. The method of claim 3, further comprising:
and voice or text communication can be carried out between the second characters, so that the second character with the highest attribute value can be placed in the most favorable position.
5. The method of claim 4, wherein displaying the interactive interface for interaction further comprises:
displaying first virtual information of the first character on the first virtual environment interface of the first area, wherein the first virtual information comprises first display information, a first experience value, a first identification value or attack information, and the first virtual information dynamically changes; and/or
Displaying second virtual information of the plurality of second characters on the second virtual environment interface of the second area, wherein the second virtual information includes second display information, a second experience value, a second identification value, character attributes or attack information, and the second virtual information dynamically changes.
6. The method of claim 1, wherein the first area is used to display a model avatar interface of the first character, comprising:
acquiring a real head image of the first character through a camera;
obtaining, from a selection area, a model avatar matching the real head image;
and displaying the selected model avatar in the first area, wherein the model avatar is controlled by the real avatar.
7. The method according to claim 1, wherein the acquiring first display information of the first character and second display information of the second character, and ending the interaction when the first display information and/or the second display information satisfy a preset condition for ending the interaction comprises:
reading first display information of the first role and second display information of the second role in real time;
and when the first display information of the first role is zero and/or the second display information of the plurality of virtual second roles is all zero, ending the interaction.
8. An interaction device, comprising:
the display unit is used for displaying an interactive interface for interaction, wherein the interactive interface comprises a first area and a second area, and the first area is used for displaying a first virtual environment interface corresponding to a first role and a real head portrait or model head portrait interface of the first role; the second area is used for displaying a second virtual environment interface which has an interactive relation with the first role, the second virtual environment interface comprises a plurality of virtual second roles, and the virtual second roles are virtual roles which automatically receive computer control instructions for interaction or virtual roles which receive other user control instructions for interaction in real time; the plurality of virtual second characters have a plurality of character attributes, the plurality of character attributes of the plurality of virtual second characters being different from each other;
the receiving unit is used for receiving a trigger signal for starting interaction of the first role and/or the second role and starting interaction according to the trigger signal;
and the control unit is used for acquiring the first display information of the first role and the second display information of the second role, and finishing the interaction when the first display information and/or the second display information meet the preset condition for finishing the interaction.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1 to 7.
CN201911101967.8A 2019-11-12 2019-11-12 Interaction method, device, medium and electronic equipment Pending CN111013135A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911101967.8A CN111013135A (en) 2019-11-12 2019-11-12 Interaction method, device, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911101967.8A CN111013135A (en) 2019-11-12 2019-11-12 Interaction method, device, medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111013135A (en) 2020-04-17

Family

ID=70201331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911101967.8A Pending CN111013135A (en) 2019-11-12 2019-11-12 Interaction method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111013135A (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101393599A (en) * 2007-09-19 2009-03-25 中国科学院自动化研究所 Game role control method based on human face expression
CN105307737A (en) * 2013-06-14 2016-02-03 洲际大品牌有限责任公司 Interactive video games
CN105335064A (en) * 2015-09-29 2016-02-17 腾讯科技(深圳)有限公司 Information processing method, terminal, and computer storage medium
US9789403B1 (en) * 2016-06-14 2017-10-17 Odile Aimee Furment System for interactive image based game
CN107592575A (en) * 2017-09-08 2018-01-16 广州华多网络科技有限公司 A kind of live broadcasting method, device, system and electronic equipment
CN107913521A (en) * 2017-11-09 2018-04-17 腾讯科技(深圳)有限公司 The display methods and device of virtual environment picture
CN108829247A (en) * 2018-06-01 2018-11-16 北京市商汤科技开发有限公司 Exchange method and device based on eye tracking, computer equipment
CN108905193A (en) * 2018-07-03 2018-11-30 百度在线网络技术(北京)有限公司 Game manipulates processing method, equipment and storage medium
CN109045688A (en) * 2018-07-23 2018-12-21 广州华多网络科技有限公司 Game interaction method, apparatus, electronic equipment and storage medium
CN109224437A (en) * 2018-08-28 2019-01-18 腾讯科技(深圳)有限公司 The exchange method and terminal and storage medium of a kind of application scenarios
CN109568937A (en) * 2018-10-31 2019-04-05 北京市商汤科技开发有限公司 Game control method and device, game terminal and storage medium
CN110339570A (en) * 2019-07-17 2019-10-18 网易(杭州)网络有限公司 Exchange method, device, storage medium and the electronic device of information

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992947A (en) * 2019-11-12 2020-04-10 北京字节跳动网络技术有限公司 Voice-based interaction method, device, medium and electronic equipment
CN110992947B (en) * 2019-11-12 2022-04-22 北京字节跳动网络技术有限公司 Voice-based interaction method, device, medium and electronic equipment

Similar Documents

Publication Publication Date Title
KR102506504B1 (en) Voice assistant system using artificial intelligence
US20210295099A1 (en) Model training method and apparatus, storage medium, and device
US10981052B2 (en) Game processing system, method of processing game, and storage medium storing program for processing game
KR102606017B1 (en) Contextually aware communications system in video games
CN109529356B (en) Battle result determining method, device and storage medium
CN110917619B (en) Interactive property control method, device, terminal and storage medium
US10139901B2 (en) Virtual reality distraction monitor
CN111672099B (en) Information display method, device, equipment and storage medium in virtual scene
US20210081985A1 (en) Advertisement interaction methods and apparatuses, electronic devices and storage media
CN110917630B (en) Enhanced item discovery and delivery for electronic video game systems
CN110992947B (en) Voice-based interaction method, device, medium and electronic equipment
CN112569607B (en) Display method, device, equipment and medium for pre-purchased prop
CN111589144B (en) Virtual character control method, device, equipment and medium
CN114130012A (en) User interface display method, device, equipment, medium and program product
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CN111013135A (en) Interaction method, device, medium and electronic equipment
CN112995687A (en) Interaction method, device, equipment and medium based on Internet
CN111013139B (en) Role interaction method, system, medium and electronic equipment
CN110928410A (en) Interaction method, device, medium and electronic equipment based on multiple expression actions
US20220062773A1 (en) User input method and apparatus
CN114210051A (en) Carrier control method, device, equipment and storage medium in virtual scene
CN110882537B (en) Interaction method, device, medium and electronic equipment
JP2022000218A (en) Program, method, information processing device, and system
CN111068308A (en) Data processing method, device, medium and electronic equipment based on mouth movement
CN114025854A (en) Program, method, and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination