CN106775243A - Information processing method and electronic device - Google Patents

Info

Publication number: CN106775243A (application CN201611168562.2A)
Authority: CN (China)
Prior art keywords: virtual object, electronic device, animation, output, determining
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN106775243B (granted publication)
Inventors: 陈军宏, 张培养
Current assignee: Xiamen Black Mirror Technology Co Ltd
Original assignee: XIAMEN HUANSHI NETWORK TECHNOLOGY Co Ltd
Application filed by XIAMEN HUANSHI NETWORK TECHNOLOGY Co Ltd; priority to CN201611168562.2A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0487: Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention disclose an information processing method and an electronic device. A first electronic device responds to a plurality of operation instructions for a first virtual object, so that the first virtual object outputs a continuous animation on the first electronic device side, and the animation data of the first virtual object is synchronized to a second electronic device. The user of the second electronic device can thus see the animation output by the first virtual object at the same time, which enhances interactivity between users.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to an information processing method and an electronic device.
Background
With the rapid development of science and technology, information interaction through electronic devices is increasingly becoming a common mode of human interaction. At present, however, such interaction is limited to voice, text or pre-stored animations, so interactivity between users is poor.
Therefore, how to enhance the interactivity between users becomes a technical problem to be solved urgently.
Disclosure of Invention
The invention aims to provide an information processing method and electronic equipment to enhance the interactivity among users.
In order to achieve the purpose, the invention provides the following technical scheme:
an information processing method is applied to a first electronic device, and comprises the following steps:
receiving at least two operation instructions aiming at a first virtual object;
determining a response mode corresponding to each operation instruction;
controlling the first virtual object to output animation according to the determined response modes;
and synchronizing the animation data of the first virtual object to a second electronic device.
In the above method, preferably, the determining a response mode corresponding to each operation instruction includes:
and according to the receiving sequence of the operation instructions, sequentially determining a response mode corresponding to each operation instruction.
In the above method, preferably, the controlling the first virtual object to output an animation according to the determined response mode includes:
each time one response mode is determined, controlling the first virtual object to output the corresponding sub-animation according to that response mode;
or,
and after the response mode is determined to be finished, controlling the first virtual object to sequentially output the sub-animations according to each determined response mode.
In the method, preferably, the at least two operation instructions are triggered by an operation body on a touch screen of the first electronic device; after receiving at least two operation instructions for the first virtual object, before controlling the first virtual object to output an animation according to the determined response modes, the method further comprises:
determining an operation area of the operation body on the touch screen;
the controlling the first virtual object to output the animation according to the determined response modes comprises:
and controlling the first virtual object to output animation in the operation area according to the determined response modes.
In the method, preferably, the first virtual object is a virtual object stored locally in the first electronic device, or the first virtual object is a virtual object received by the first electronic device and sent by the second electronic device.
In the above method, preferably, the determining a response mode corresponding to each operation instruction includes:
determining a virtual scene in which the first virtual object is located;
and determining a response mode corresponding to each operation instruction in the virtual scene.
The above method, preferably, further comprises:
monitoring a duration of receiving at least two operational instructions for the first virtual object;
when the duration is longer than a preset duration, outputting prompt information; the prompt message comprises duration and selection information of whether to continue operation;
if the user selects to continue the operation, continuing to receive an operation instruction aiming at the first virtual object;
and if the user selects not to continue the operation, prohibiting receiving the operation instruction aiming at the first virtual object.
A first electronic device, comprising:
a communication component, a display screen and a processor; wherein,
the communication component is used for communicating with a second electronic device;
the processor is used for receiving at least two operation instructions aiming at the first virtual object; determining a response mode corresponding to each operation instruction; controlling the first virtual object to output animation according to the determined response modes; and synchronizing the animation data of the first virtual object to a second electronic device.
Preferably, in the first electronic device, the determining, by the processor, a response mode corresponding to each operation instruction includes:
and the processor sequentially determines a response mode corresponding to each operation instruction according to the receiving sequence of the operation instructions.
In the first electronic device, preferably, the display screen is a touch screen, and the at least two operation instructions are triggered by an operation body on the touch screen;
the processor is further used for determining an operation area of the operation body on the touch screen after receiving at least two operation instructions for the first virtual object and before controlling the first virtual object to output animation according to the determined response modes;
the processor controlling the first virtual object to output the animation according to the determined response modes comprises the following steps:
and the processor controls the first virtual object to output animation in the operation area according to the determined response modes.
Preferably, in the first electronic device, the determining, by the processor, a response mode corresponding to each operation instruction includes:
the processor determining a virtual scene in which the first virtual object is located; and determining a response mode corresponding to each operation instruction in the virtual scene.
In the first electronic device, preferably, the processor is further configured to: monitor a duration of receiving at least two operation instructions for the first virtual object; when the duration is longer than a preset duration, output prompt information, the prompt information comprising the duration and selection information on whether to continue the operation; if the user selects to continue, continue to receive operation instructions for the first virtual object; and if the user selects not to continue, prohibit receiving operation instructions for the first virtual object.
According to the information processing method and the electronic device, the first electronic device responds to a plurality of operation instructions for the first virtual object, so that the first virtual object outputs a continuous animation on the first electronic device side, and the animation data of the first virtual object is synchronized to the second electronic device. The user of the second electronic device can thus see the animation output by the first virtual object at the same time, which enhances interactivity between users.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1a is a flowchart of a first implementation of an information processing method according to an embodiment of the present invention;
Fig. 1b is an exemplary diagram of an operation body performing at least two operations to trigger at least two operation instructions according to an embodiment of the present invention;
Fig. 1c is another exemplary diagram of an operation body performing at least two operations to trigger at least two operation instructions;
Fig. 1d is a further exemplary diagram of an operation body performing at least two operations to trigger at least two operation instructions;
Fig. 2 is a flowchart of a second implementation of an information processing method according to an embodiment of the present invention;
Fig. 3 is a flowchart of a third implementation of an information processing method according to an embodiment of the present invention;
Fig. 4 is a flowchart of a fourth implementation of an information processing method according to an embodiment of the present invention;
Fig. 5 is a flowchart of a fifth implementation of an information processing method according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a first electronic device according to an embodiment of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
The information processing method provided by the embodiment of the invention can be applied to first electronic equipment, the first electronic equipment can display the virtual object, and the first electronic equipment can also communicate with second electronic equipment.
Referring to fig. 1a, fig. 1a is a flowchart of an implementation of an information processing method according to an embodiment of the present invention, which may include:
step S11: receiving at least two operation instructions aiming at a first virtual object;
the first virtual object may be a virtual object stored locally by the first electronic device.
Alternatively, the first virtual object may be a virtual object transmitted to the first electronic device by the second electronic device. For example, the first electronic device may receive the virtual object sent by the second electronic device through the instant messaging software.
The at least two operation instructions are generated by the user operating the first virtual object with an operation body. Different operations trigger different operation instructions. The at least two operation instructions may be at least partially identical, or may be entirely different. The operation body may be an input device such as a keyboard, a mouse or a stylus, or may be a part of the user's body, such as a finger.
In the at least two operations that trigger the at least two operation instructions, the time interval between two adjacent operations may be less than a preset time length, so that the at least two operation instructions have a certain continuity. That is, the at least two operation instructions may be triggered by at least two discontinuous operations, by at least two continuous operations, or by a mixture of both, with some instructions generated by discontinuous operations and others by continuous operations. Fig. 1b illustrates an example of an operation body performing at least two operations to trigger at least two operation instructions. In fig. 1b, the user performs two discontinuous operations: first sliding the operation body from left to right, then sliding it from bottom to top; the two operations generate two operation instructions in sequence. Fig. 1c shows another such example. In fig. 1c, the user performs three consecutive operations: first sliding the operation body from left to right, then controlling it to perform a parabolic motion starting from the end position of the slide, and finally controlling it to draw a circle starting from the end position of the parabola. Fig. 1d shows a further example. In fig. 1d, of the three operations performed by the user, some are continuous and some are discontinuous: the operation body first slides from left to right, then performs a parabolic motion starting from the end position of the slide, and finally slides from bottom to top. The first two operations in fig. 1d are consecutive, while the third operation is discontinuous with the second.
It should be noted that the number of operations shown in fig. 1b to fig. 1d is merely an exemplary illustration, and more operations may be available in the implementation process, which is not described in detail herein.
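The continuity criterion described above (two adjacent operations whose time interval is below a preset length) can be sketched in Python. This is an illustrative sketch, not the patent's implementation: the `Operation` representation, the operation names, and the 0.5-second threshold are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical representation of one touch operation; fields are illustrative.
@dataclass
class Operation:
    name: str
    start_time: float  # seconds
    end_time: float

def group_by_continuity(ops, max_gap=0.5):
    """Split a received sequence of operations into runs of 'continuous'
    operations: adjacent operations whose time interval is below max_gap
    (the 'preset time length' in the text) fall into the same run."""
    groups = []
    for op in ops:
        if groups and op.start_time - groups[-1][-1].end_time < max_gap:
            groups[-1].append(op)  # continuous with the previous operation
        else:
            groups.append([op])    # gap too large: start a new run
    return groups

# The three operations of fig. 1d: slide, then a continuous parabola,
# then a discontinuous slide from bottom to top.
ops = [Operation("slide-left-to-right", 0.0, 0.4),
       Operation("parabola", 0.45, 1.0),
       Operation("slide-bottom-to-top", 2.5, 3.0)]
runs = group_by_continuity(ops)
```

Under these assumed timings the first two operations form one continuous run and the third starts a new one, matching the fig. 1d description.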
Step S12: determining a response mode corresponding to each operation instruction;
different operation instructions correspond to different response modes. The different response modes correspond to different animations of the first virtual object. For convenience of description, the animation of the first virtual object corresponding to each response mode is denoted as a sub-animation.
Step S13: controlling the first virtual object to output the animation according to the determined response modes;
in the embodiment of the present invention, the animation output by the first virtual object is a combination of sub-animations corresponding to the respective response modes. In the embodiment of the present invention, the output order of the sub-animations corresponds to the receiving order of the operation commands, that is, the earlier the receiving time of the operation commands is, the earlier the output time of the sub-animation corresponding to the response mode determined based on the operation commands is.
Step S14: the animation data of the first virtual object is synchronized to the second electronic device.
In the embodiment of the present invention, in addition to controlling the first virtual object to output the animation on the first electronic device according to the determined response modes, the animation data of the first virtual object is synchronized to the second electronic device, so that the second electronic device side also outputs the animation of the first virtual object.
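A minimal sketch of the synchronization in step S14, assuming a JSON payload whose shape (field names, ordering of sub-animations) is an illustration; the patent does not specify a wire format.

```python
import json

def make_sync_payload(object_id, sub_animations):
    """Package the first virtual object's animation data for the second
    electronic device; sub_animations are listed in output order."""
    return json.dumps({
        "object_id": object_id,
        "sub_animations": sub_animations,
    })

def apply_sync_payload(payload):
    """What the second device would do on receipt: decode the payload and
    return the object plus the sub-animations to replay."""
    data = json.loads(payload)
    return data["object_id"], data["sub_animations"]

payload = make_sync_payload("virtual-object-1", ["walk", "jump"])
obj, anims = apply_sync_payload(payload)
```
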
According to the information processing method provided by the embodiment of the invention, the first electronic device responds to a plurality of operation instructions for the first virtual object, so that the first virtual object outputs a continuous animation on the first electronic device side, and the animation data of the first virtual object is synchronized to the second electronic device. The user of the second electronic device can thus see the animation output by the first virtual object at the same time, which enhances interactivity between users.
It should be noted that, in the following embodiments, the first virtual object is taken as a virtual object stored locally by the first electronic device, and the second virtual object is taken as a virtual object stored locally by the second electronic device. Alternatively, in other embodiments, the first virtual object may be a virtual object stored locally by the second electronic device, and the second virtual object may be a virtual object stored locally by the first electronic device. The invention is not limited in this regard.
In an optional embodiment, the first electronic device may receive file data of the second virtual object sent by the second electronic device, and display the second virtual object in the first electronic device according to the file data; meanwhile, the first electronic equipment can receive animation data of the second virtual object sent by the second electronic equipment; and displaying the animation of the second virtual object according to the animation data of the second virtual object, so that the first electronic equipment user can see the animation effect of the first virtual object and the animation effect of the second virtual object simultaneously, and the interactivity between the users is further enhanced. The animation data of the second virtual object is animation data of an animation output after a user of the second electronic device performs continuous operation on the second virtual object.
In an optional embodiment, one implementation manner of determining a response manner corresponding to each operation instruction may be:
and according to the receiving sequence of the operation instructions, sequentially determining a response mode corresponding to each operation instruction. Each time an operation instruction is received, a response mode corresponding to the operation instruction can be determined.
In an alternative embodiment, the first virtual object may be controlled to output the sub-animation in accordance with the determined response mode every time one response mode is determined. I.e. the sub-animation is output in real time.
In an alternative embodiment, after the response mode is determined to be completed, the first virtual object may be controlled to sequentially output the sub-animations according to the determined response modes.
Different from the previous embodiment, in this embodiment of the present invention, the sub-animations corresponding to the response modes are output in sequence only after the response modes corresponding to all operation instructions have been determined. As long as the response mode of any operation instruction has not yet been determined, no sub-animation is output, even for response modes already determined.
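The two output strategies described above (real-time output per determined response mode versus batched output after all response modes are determined) can be sketched as follows. Function names and the playback callback are illustrative assumptions; both strategies yield the same final sequence and differ only in when playback may begin.

```python
def output_realtime(instructions, determine_mode, play):
    # Play each sub-animation as soon as its response mode is determined.
    for instruction in instructions:
        play(determine_mode(instruction))

def output_batched(instructions, determine_mode, play):
    # First determine every response mode; only then play the sub-animations
    # in the order the instructions were received.
    modes = [determine_mode(i) for i in instructions]
    for mode in modes:
        play(mode)

played_a, played_b = [], []
output_realtime(["x", "y"], str.upper, played_a.append)
output_batched(["x", "y"], str.upper, played_b.append)
```
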
In an alternative embodiment, the first electronic device may be an electronic device having a touch screen. The at least two operation instructions may be triggered by the operation of the first virtual object on the touch screen of the first electronic device by the operation body. Based on this, another implementation flowchart of the information processing method provided by the embodiment of the present invention is shown in fig. 2, and may include:
step S21: at least two operation instructions for the first virtual object are received. The specific implementation manner of this step can refer to the embodiment shown in fig. 1, and is not described herein again.
Step S22: and determining an operation area of the operation body on the touch screen.
In general, different users have different operating habits, such as operation range and operation angle, when operating a touch screen. In the embodiment of the invention, the operation area is determined according to the actual operations of the operation body, so that the user can operate the first virtual object according to his or her own habits; this avoids the inconvenience that a fixed, preset operation area would impose on the user.
Step S23: and determining a response mode corresponding to each operation instruction. The specific implementation manner of this step can refer to the embodiment shown in fig. 1, and is not described herein again.
In the embodiment of the present invention, the execution sequence of steps S22 and S23 is not limited, and step S22 may be executed first, and then step S23 may be executed; alternatively, step S23 is executed first, and then step S22 is executed; alternatively, step S22 is performed simultaneously with step S23.
Step S24: controlling the first virtual object to output animation in the operation area according to the determined response modes;
outside the operation area, some related virtual keys, such as a scene selection key, a sound effect key, etc., may be displayed to facilitate the user to make a corresponding selection.
Step S25: the animation data of the first virtual object is synchronized to the second electronic device. The specific implementation manner of this step can refer to the embodiment shown in fig. 1, and is not described herein again.
In an optional embodiment, one implementation manner of determining a response manner corresponding to each operation instruction may be:
determining a virtual scene where the first virtual object is located;
in the embodiment of the invention, a plurality of selectable virtual scenes are set, and a user can select one of the virtual scenes as the virtual scene where the first virtual object is located. That is, the virtual scene in which the first virtual object is located may be a virtual scene selected by the user.
And determining a response mode corresponding to each operation instruction under the determined virtual scene.
In the embodiment of the invention, under different virtual scenes, the response modes corresponding to the same operation instruction are different.
For example, if the virtual scene is a romantic music background, the response mode of the first virtual object is a softer response mode, that is, the animation output effect is: the motion amplitude of the first virtual object is small and soft; and if the virtual scene is rock music, the response mode of the first virtual object is wild, namely the animation output effect is as follows: the first virtual object has a large amplitude of motion and is full of force.
Optionally, the information processing method provided in the embodiment of the present invention may further include:
monitoring a duration of receiving at least two operating instructions for a first virtual object;
the duration is the duration of the operation of the user on the first virtual object.
When the duration is longer than the preset duration, outputting prompt information; the prompt message comprises duration and selection information of whether to continue operation;
if the user selects to continue the operation, continuing to receive an operation instruction aiming at the first virtual object;
and if the user selects not to continue the operation, the receiving of the operation instruction aiming at the first virtual object is forbidden.
In the embodiment of the invention, in order to avoid the damage to the user caused by the long-time continuous interaction with the electronic equipment, the user is prompted when the monitored duration is longer than the preset duration, and the user selects whether to continue.
In an optional embodiment, when the duration is longer than the preset duration and the prompt information is output, the prompt information may further include the network currently used by the first electronic device, in addition to the duration and the selection information on whether to continue the operation. The user can then decide whether to continue according to network usage. This prevents the first and second electronic devices from consuming excessive data traffic through prolonged interaction over a metered connection, avoiding extra cost.
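The duration-monitoring logic can be sketched as follows. The prompt's field names and the optional network hint are illustrative assumptions about how the prompt information might be represented.

```python
def check_duration(elapsed_s, preset_s, network=None):
    """Return prompt information once the operation duration exceeds the
    preset duration, otherwise None. Per the text, the prompt carries the
    duration, a continue/stop choice and, optionally, the network in use."""
    if elapsed_s <= preset_s:
        return None
    prompt = {"duration": elapsed_s, "choices": ["continue", "stop"]}
    if network is not None:
        prompt["network"] = network  # lets the user weigh data-traffic cost
    return prompt

def may_receive_instructions(choice):
    # Choosing "stop" prohibits receiving further operation instructions
    # for the first virtual object.
    return choice == "continue"
```
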
In an alternative embodiment, the user's voice may be collected and recognized; when the recognition result indicates that receiving of operation instructions for the first virtual object should stop, the first electronic device no longer receives operation instructions for the first virtual object.
In an optional embodiment, after controlling the first virtual object to output the animation according to the determined response modes, animation data of the first virtual object may be further stored. So that animation data of the first virtual object can be directly used.
Fig. 3 shows a flowchart of another implementation of the information processing method according to an embodiment of the present invention, which may include:
step S31: the method comprises the steps that a first electronic device receives at least two first-class operation instructions aiming at a first virtual object; the first virtual object is a virtual object stored locally in the first electronic device;
step S32: the first electronic equipment determines a response mode corresponding to each first type of operation instruction;
step S33: the first electronic equipment controls the first virtual object to output the animation according to the determined response mode;
step S34: the first electronic device synchronizes animation data of the first virtual object to the second electronic device.
Step S35: the second electronic device outputs the animation of the first virtual object according to the animation data of the first virtual object.
In an alternative embodiment, in step S33, after all response modes have been determined, the first virtual object may be controlled to sequentially output the sub-animations according to the determined response modes.
In another alternative embodiment, in step S33, the first virtual object may instead be controlled to output the corresponding sub-animation each time a response mode is determined, i.e. the sub-animations are output in real time. In this case, steps S34 and S35 are executed after each sub-animation is output; alternatively, steps S33, S34 and S35 are performed simultaneously.
It should be noted that the execution sequence of steps S33 and S34 is not limited, and step S33 may be executed first, and then step S34 may be executed; alternatively, step S33 is performed simultaneously with step S34.
On the basis of the embodiment shown in fig. 3, a flowchart of another implementation of the information processing method provided in the embodiment of the present invention is shown in fig. 4, and may include:
step S31: a first electronic device receives at least two first-type operation instructions for a first virtual object; the first virtual object is a virtual object stored locally on the first electronic device.
Step S32: the first electronic device determines a first-type response mode corresponding to each first-type operation instruction.
Step S33: the first electronic device controls the first virtual object to output an animation according to the determined first-type response modes.
Step S34: the first electronic device synchronizes animation data of the first virtual object to the second electronic device.
Step S35: the second electronic device outputs the animation of the first virtual object according to the animation data of the first virtual object.
Step S41: the second electronic device receives at least two second-type operation instructions for a second virtual object; the second virtual object is a virtual object stored locally on the second electronic device; the first virtual object and the second virtual object may be displayed simultaneously.
Step S42: the second electronic device determines a second-type response mode corresponding to each second-type operation instruction.
Step S43: the second electronic device controls the second virtual object to output an animation according to the determined second-type response modes.
Step S44: the second electronic device synchronizes the animation data of the second virtual object to the first electronic device.
Step S45: the first electronic device outputs the animation of the second virtual object according to the animation data of the second virtual object. The first virtual object and the second virtual object may be displayed simultaneously.
With the embodiment shown in fig. 4, the first virtual object and the second virtual object can be displayed simultaneously on both the first electronic device side and the second electronic device side.
In an alternative embodiment, steps S31 to S35 and steps S41 to S45 may be performed simultaneously; steps S41 to S45 need not wait until steps S31 to S35 have been completed. For example, the user of the first electronic device may continuously operate the first virtual object: the first electronic device receives at least two operation instructions for the first virtual object and then sends the animation data of the first virtual object to the second electronic device. While the first user operates the first virtual object, the user of the second electronic device may continuously operate the second virtual object: the second electronic device receives at least two operation instructions for the second virtual object and then sends the animation data of the second virtual object to the first electronic device.
Based on the above, the first electronic device simultaneously outputs the animation of the first virtual object and the animation of the second virtual object to form an interactive picture of the first virtual object and the second virtual object; similarly, the second electronic device also outputs the animation of the first virtual object and the animation of the second virtual object at the same time to form an interactive picture of the first virtual object and the second virtual object.
In another optional embodiment, on the first electronic device side, the user of the first electronic device may also continuously operate the second virtual object; on the second electronic device side, the user of the second electronic device may also continuously operate the first virtual object.
Specifically, when a user of the first electronic device operates the second virtual object, the first electronic device receives at least two first-type operation instructions for the second virtual object; determines a first-type response mode corresponding to each first-type operation instruction; controls the second virtual object to output an animation according to the determined first-type response modes; and synchronizes the animation data of the second virtual object to the second electronic device.
While the user of the first electronic device operates the second virtual object, the user of the second electronic device may operate the first virtual object: the second electronic device receives at least two second-type operation instructions for the first virtual object; determines a second-type response mode corresponding to each second-type operation instruction; controls the first virtual object to output an animation according to the determined second-type response modes; and synchronizes the animation data of the first virtual object to the first electronic device.
Based on this, the first electronic device can simultaneously output the animation of the first virtual object and the animation of the second virtual object, forming an interactive picture of the first virtual object and the second virtual object; similarly, the second electronic device also outputs both animations at the same time, forming the same interactive picture of the two virtual objects.
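The bidirectional synchronization described above — each device animating a virtual object locally and forwarding the resulting animation data so that both screens show both objects — can be sketched as follows. The class and method names are hypothetical; the patent does not prescribe any particular implementation.

```python
class Device:
    """Minimal model of one electronic device side: it renders animations
    locally and synchronizes its local object's animation data to a peer."""

    def __init__(self, name, local_object):
        self.name = name
        self.local_object = local_object
        self.screen = []   # (virtual_object, animation) pairs rendered so far
        self.peer = None

    def operate(self, animation):
        # The local virtual object animates in response to user instructions...
        self.screen.append((self.local_object, animation))
        # ...and its animation data is synchronized to the peer device.
        self.peer.receive(self.local_object, animation)

    def receive(self, virtual_object, animation):
        # Animation data received from the peer is rendered locally too.
        self.screen.append((virtual_object, animation))

first, second = Device("first", "obj1"), Device("second", "obj2")
first.peer, second.peer = second, first
first.operate("dance")    # first user operates the first virtual object
second.operate("dance")   # second user operates the second virtual object
# Both screens now show the interactive picture of both objects.
```

After the two `operate` calls, each device's `screen` contains the animations of both virtual objects, mirroring the "interactive picture" described above.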
Fig. 5 shows a flowchart of another implementation of the information processing method according to the embodiment of the present invention, which may include:
step S51: the first electronic device receives and displays the second virtual object sent by the second electronic device; at this point, the first electronic device displays the first virtual object and the second virtual object simultaneously.
Step S52: the first electronic device, in response to a connection instruction triggered by a first user, controls the first virtual object to output a first animation;
step S53: the first electronic device sends the first virtual object and the first animation data of the first virtual object to the second electronic device.
The connection instruction may be triggered by a long-press operation performed by the user of the first electronic device (i.e., the first user) on a touch screen of the first electronic device. In this case, the first user may perform the long-press operation on the first virtual object.
In an alternative example, the first animation may show the first virtual object extending a hand to make an invitation.
Step S54: after receiving the first virtual object and the first animation data of the first virtual object sent by the first electronic device, the second electronic device controls the first virtual object to output the first animation according to the first animation data. At this point, the second electronic device simultaneously displays the first virtual object and the second virtual object, with the first virtual object outputting the first animation.
Step S55: the second electronic device, in response to a response instruction triggered by a second user, controls the second virtual object to output a second animation;
step S56: the second electronic device sends the second animation data of the second virtual object to the first electronic device.
After the second electronic device displays the first virtual object and its animation, if the user of the second electronic device (i.e., the second user) wants to interact with the first user, the second user can perform a long-press operation on the touch screen of the second electronic device to trigger the response instruction. For example, the second user may trigger the response instruction by long-pressing the first virtual object, such as the hand of the first virtual object.
In an alternative example, the second animation may show the second virtual object extending a hand to accept the invitation. Thus, after the first virtual object extends its hand to make the invitation, the user of the second electronic device can see the second virtual object extend its hand to accept the first virtual object's invitation.
Step S57: after receiving the second animation data of the second virtual object sent by the second electronic device, the first electronic device controls the second virtual object to output the second animation according to the second animation data. Thus, after the first virtual object extends its hand to make the invitation, the user of the first electronic device can likewise see the second virtual object extend its hand to accept the invitation.
Step S58: the first electronic device receives at least two continuously input first-type operation instructions for the second virtual object and, in response, controls the second virtual object to output a third animation, such as dancing. For the specific implementation, reference may be made to the embodiment shown in fig. 1. The at least two continuously input first-type operation instructions may be triggered by a series of drag operations performed by the user.
Step S59: the first electronic device transmits the third animation data to the second electronic device.
Step S510: the second electronic device receives at least two continuously input second-type operation instructions for the first virtual object and, in response, controls the first virtual object to output a fourth animation, such as dancing. For the specific implementation, reference may be made to the embodiment shown in fig. 1. The at least two continuously input second-type operation instructions may be triggered by a series of drag operations performed by the user.
Step S511: the second electronic device transmits the fourth animation data to the first electronic device.
The execution sequence of steps S58 and S510 is not particularly limited: step S58 may be performed before step S510, after step S510, or simultaneously with step S510.
Step S512: after receiving the third animation data sent by the first electronic device, the second electronic device controls the second virtual object to output the third animation.
Step S513: after receiving the fourth animation data sent by the second electronic device, the first electronic device controls the first virtual object to output the fourth animation.
Through step S512, the user of the second electronic device can see the animation of the first virtual object and the second virtual object dancing together; through step S513, the user of the first electronic device can see the same interaction. This gives each user the feeling of interacting with the other party as in the real world.
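The fig. 5 interaction — a long press on the first device triggering the invitation animation (step S52), a long press on the second device triggering the acceptance (step S55), and series of drag operations triggering the dance animations of steps S58 and S510 — can be sketched as a simple event-to-animation mapping. All event and animation names here are illustrative assumptions, not terms from the patent.

```python
def handshake(events):
    """Map the ordered user events of the fig. 5 flow onto the animations
    each side outputs; returns the shared animation timeline as
    (virtual_object, animation) pairs."""
    timeline = []
    for device, event in events:
        if device == "first" and event == "long_press":
            # Connection instruction (step S52): first object invites.
            timeline.append(("first_object", "extend_hand_invite"))
        elif device == "second" and event == "long_press":
            # Response instruction (step S55): second object accepts.
            timeline.append(("second_object", "extend_hand_accept"))
        elif event == "drag_series":
            # Steps S58/S510: each user drags the *other* side's object,
            # which then outputs a dance animation.
            obj = "second_object" if device == "first" else "first_object"
            timeline.append((obj, "dance"))
    return timeline
```

For example, the full sequence of fig. 5 would be `handshake([("first", "long_press"), ("second", "long_press"), ("first", "drag_series"), ("second", "drag_series")])`.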
Corresponding to the method embodiments, an embodiment of the present application further provides a first electronic device. A schematic structural diagram of the first electronic device provided in an embodiment of the present invention is shown in fig. 6, and the device may include:
a communication component 61, a display screen 62, and a processor 63; wherein,
the communication component 61 is used for communicating with a second electronic device;
the processor 63 is configured to: receive at least two operation instructions for a first virtual object; determine a response mode corresponding to each operation instruction; control the first virtual object to output an animation through the display screen 62 according to the determined response modes; and synchronize the animation data of the first virtual object to the second electronic device.
In the first electronic device provided by the embodiment of the invention, the first electronic device responds to a plurality of operation instructions for the first virtual object, so that the first virtual object outputs a continuous animation on the first electronic device side, and the animation data of the first virtual object is synchronized to the second electronic device. The user of the second electronic device can therefore see the animation output by the first virtual object at the same time, which enhances interactivity between users.
In an alternative embodiment, the determining, by the processor 63, the response mode corresponding to each operation instruction may include:
the processor 63 determines the response mode corresponding to each operation instruction in turn in accordance with the reception order of the operation instructions.
In an alternative embodiment, the processor 63 controlling the first virtual object to output the animation according to the determined response mode may include:
each time the processor 63 determines a response mode, it controls the first virtual object to output the corresponding sub-animation;
or,
after all response modes have been determined, the processor 63 controls the first virtual object to sequentially output the sub-animations according to the determined response modes.
In an alternative embodiment, the display screen 62 may be a touch screen, and the at least two operation instructions are triggered by an operation body operating on the touch screen. The processor 63 may be further configured to determine an operation area of the operation body on the touch screen after receiving the at least two operation instructions for the first virtual object and before controlling the first virtual object to output the animation according to the determined response modes;
accordingly, the controlling of the first virtual object by the processor 63 to output the animation according to the determined respective response modes may include:
the processor 63 controls the first virtual object to output animation in the operation region in accordance with the determined respective response manners.
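One plausible reading of confining the animation output to the operation area is to compute a bounding box of the operation body's touch points and keep the animation's positions inside it. This is an assumption for illustration; the patent does not specify how the operation area is determined or enforced.

```python
def operation_area(touch_points):
    """Bounding box of the operation body's touch points on the touch
    screen, as (min_x, min_y, max_x, max_y)."""
    xs = [p[0] for p in touch_points]
    ys = [p[1] for p in touch_points]
    return (min(xs), min(ys), max(xs), max(ys))

def clamp_to_area(position, area):
    """Keep an animation frame's (x, y) position inside the operation
    area, so the output animation stays within the region the user
    actually operated."""
    x0, y0, x1, y1 = area
    x, y = position
    return (min(max(x, x0), x1), min(max(y, y0), y1))
```

For instance, touch points at (10, 20) and (30, 5) yield the area (10, 5, 30, 20), and any animation position outside that box is pulled to its nearest edge.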
In an alternative embodiment, the first virtual object may be a virtual object stored locally on the first electronic device, or may be a virtual object that the first electronic device receives from the second electronic device.
In an alternative embodiment, the determining, by the processor 63, the response mode corresponding to each operation instruction may include:
the processor 63 determines the virtual scene in which the first virtual object is located; and determining a response mode in the virtual scene corresponding to each operation instruction.
In an optional embodiment, the processor 63 may be further configured to: monitor the duration of receiving the at least two operation instructions for the first virtual object; output prompt information when the duration exceeds a preset duration, the prompt information including the duration and selection information on whether to continue the operation; continue receiving operation instructions for the first virtual object if the user selects to continue; and prohibit receiving operation instructions for the first virtual object if the user selects not to continue.
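The duration-monitoring option above can be sketched as follows. The threshold value and the prompt callback are assumptions; the patent only requires that a prompt containing the duration and a continue/stop choice be shown once the preset duration is exceeded.

```python
PRESET_DURATION = 60.0  # seconds; hypothetical preset threshold

def should_keep_receiving(elapsed, ask_user):
    """Return True if operation instructions for the first virtual object
    should still be accepted. `ask_user` displays the prompt information
    (including the elapsed duration) and returns the user's choice:
    True to continue the operation, False to stop."""
    if elapsed <= PRESET_DURATION:
        return True  # below the preset duration: no prompt, keep receiving
    # Duration exceeded: prompt the user and follow their selection.
    return ask_user(f"Operated for {elapsed:.0f}s. Continue?")
```

In a real device the elapsed time would come from a monotonic clock and the prompt from the UI layer; here both are parameters so the logic stands alone.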
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in RAM, flash memory, ROM, EPROM, EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, a hard disk, a removable disk, a CD-ROM (Compact Disc Read-Only Memory), or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. An information processing method applied to a first electronic device, the method comprising:
receiving at least two operation instructions aiming at a first virtual object;
determining a response mode corresponding to each operation instruction;
controlling the first virtual object to output animation according to the determined response modes;
and synchronizing the animation data of the first virtual object to a second electronic device.
2. The method of claim 1, wherein determining the response mode corresponding to each operation instruction comprises:
and according to the receiving sequence of the operation instructions, sequentially determining a response mode corresponding to each operation instruction.
3. The method of claim 2, wherein controlling the first virtual object to output an animation in the determined responsive manner comprises:
controlling the first virtual object to output the sub-animation according to the determined response mode every time one response mode is determined;
or,
and after the response mode is determined to be finished, controlling the first virtual object to sequentially output the sub-animations according to each determined response mode.
4. The method according to claim 1, wherein the at least two operation instructions are triggered by an operation of an operation body on a touch screen of the first electronic device; after receiving at least two operation instructions for the first virtual object, before controlling the first virtual object to output an animation according to the determined response modes, the method further comprises:
determining an operation area of the operation body on the touch screen;
the controlling the first virtual object to output the animation according to the determined response modes comprises:
and controlling the first virtual object to output animation in the operation area according to the determined response modes.
5. The method according to claim 1, wherein the first virtual object is a virtual object stored locally by the first electronic device, or the first virtual object is a virtual object transmitted by the second electronic device and received by the first electronic device.
6. The method of claim 1, wherein determining the response mode corresponding to each operation instruction comprises:
determining a virtual scene in which the first virtual object is located;
and determining a response mode corresponding to each operation instruction in the virtual scene.
7. The method of claim 1, further comprising:
monitoring a duration of receiving at least two operational instructions for the first virtual object;
when the duration is longer than a preset duration, outputting prompt information; the prompt message comprises duration and selection information of whether to continue operation;
if the user selects to continue the operation, continuing to receive an operation instruction aiming at the first virtual object;
and if the user selects not to continue the operation, prohibiting receiving the operation instruction aiming at the first virtual object.
8. A first electronic device, comprising:
a communication component, a display screen and a processor; wherein,
the communication component is used for communicating with a second electronic device;
the processor is used for receiving at least two operation instructions aiming at the first virtual object; determining a response mode corresponding to each operation instruction; controlling the first virtual object to output animation according to the determined response modes; and synchronizing the animation data of the first virtual object to a second electronic device.
9. The first electronic device of claim 8, wherein the processor determining a response mode corresponding to each operational instruction comprises:
and the processor sequentially determines a response mode corresponding to each operation instruction according to the receiving sequence of the operation instructions.
10. The first electronic device according to claim 8, wherein the display screen is a touch screen, and the at least two operation instructions are triggered by an operation performed by an operation body on the touch screen;
the processor is further used for determining an operation area of the operation body on the touch screen after receiving at least two operation instructions aiming at the first virtual object and before controlling the first virtual object to output animation according to the determined response modes;
the processor controlling the first virtual object to output the animation according to the determined response modes comprises the following steps:
and the processor controls the first virtual object to output animation in the operation area according to the determined response modes.
11. The first electronic device of claim 8, wherein the processor determining a response mode corresponding to each operational instruction comprises:
the processor determining a virtual scene in which the first virtual object is located; and determining a response mode corresponding to each operation instruction in the virtual scene.
12. The first electronic device of claim 8, wherein the processor is further configured to monitor a duration of receiving at least two operation instructions for the first virtual object; when the duration is longer than a preset duration, outputting prompt information; the prompt message comprises duration and selection information of whether to continue operation; if the user selects to continue the operation, continuing to receive an operation instruction aiming at the first virtual object; and if the user selects not to continue the operation, prohibiting receiving the operation instruction aiming at the first virtual object.
CN201611168562.2A 2016-12-16 2016-12-16 Information processing method and electronic equipment Active CN106775243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611168562.2A CN106775243B (en) 2016-12-16 2016-12-16 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN106775243A true CN106775243A (en) 2017-05-31
CN106775243B CN106775243B (en) 2020-02-11

Family

ID=58893164


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109656464A (en) * 2018-12-29 2019-04-19 北京字节跳动网络技术有限公司 A kind of processing method of interaction data, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279279A (en) * 2013-05-20 2013-09-04 南京恒知讯科技有限公司 Electronic drawing board system, data processing method and device based on multi-user collaborative operation
CN103488371A (en) * 2012-06-11 2014-01-01 中兴通讯股份有限公司 Method for making animation on mobile terminal and mobile terminal
CN103838808A (en) * 2012-11-26 2014-06-04 索尼公司 Information processing apparatus and method, and program
CN104407764A (en) * 2013-10-29 2015-03-11 贵阳朗玛信息技术股份有限公司 Method and device of presenting scene effect
CN105677362A (en) * 2016-02-03 2016-06-15 广州市久邦数码科技有限公司 Image drawing method and system
CN105892650A (en) * 2016-03-28 2016-08-24 联想(北京)有限公司 Information processing method and electronic equipment
US20160283020A1 (en) * 2015-03-23 2016-09-29 Lg Electronics Inc. Mobile terminal and control method thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190215

Address after: 361000 Fujian Xiamen Torch High-tech Zone Software Park Innovation Building Area C 3F-A193

Applicant after: Xiamen Black Mirror Technology Co., Ltd.

Address before: 9th Floor, Maritime Building, 16 Haishan Road, Huli District, Xiamen City, Fujian Province, 361000

Applicant before: XIAMEN HUANSHI NETWORK TECHNOLOGY CO., LTD.

GR01 Patent grant