CN113546420B - Virtual object control method and device, storage medium and electronic equipment


Info

Publication number: CN113546420B (granted from application CN202110839197.8A; earlier publication CN113546420A)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: virtual, virtual object, bone, user, controlling
Inventor: 夏琰
Original and current assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd; priority to CN202110839197.8A
Legal status: Active (granted)


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a virtual object control method, a virtual object control device, a storage medium and electronic equipment. The method comprises the following steps: acquiring action data of a preset part of a user; controlling movement of a first virtual part of the virtual object according to the action data and a first association relation between the preset part of the user and the first virtual part; and controlling movement of a second virtual part of the virtual object according to a second association relation between the first virtual part and the second virtual part. The method and device can improve the harmony and flexibility of the virtual object's movement and enrich the expressive effect.

Description

Virtual object control method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for controlling a virtual object, a storage medium, and an electronic device.
Background
Virtual objects such as virtual anchors and virtual idols are becoming increasingly popular, creating an immersive, dimension-breaking experience for fans through advertisements, endorsements, performances, online live streaming, and the like. However, an existing virtual object can only move its head synchronously by capturing the head movement of a real person, while its body can only perform preset actions. As a result, the overall movement of the virtual object is quite discordant, the virtual object lacks flexibility, and the performance effect is poor.
Disclosure of Invention
The embodiment of the application provides a virtual object control method and device, a storage medium, and electronic equipment, which can improve the harmony and flexibility of the virtual object's movement and enrich the expressive effect.
The embodiment of the application provides a control method of a virtual object, which comprises the following steps:
acquiring action data of a preset part of a user;
controlling the movement of a first virtual part of the virtual object according to the action data and the first association relation between the preset part of the user and the first virtual part of the virtual object;
and controlling the movement of the second virtual part of the virtual object according to the second association relation between the first virtual part and the second virtual part of the virtual object.
Optionally, the first association relationship is a one-to-one correspondence relationship between the user preset part and a first virtual part of the virtual object;
the second association relationship is a corresponding relationship between a first virtual part and a second virtual part of the virtual object, and a binding relationship between a motion state of the first virtual part and a motion state of the corresponding second virtual part.
Optionally, the motion state of the first virtual part includes a first motion parameter, and the first motion parameter includes at least one of the following: rotation parameters, opening and closing parameters, swinging parameters and scaling parameters; the motion state of the second virtual part comprises a second motion parameter, and the second motion parameter comprises at least one of the following: rotation parameters, opening and closing parameters, swinging parameters and scaling parameters;
The binding relation is the corresponding relation between the first motion parameter and the second motion parameter.
Optionally, the binding relationship is a proportional relationship between the degree of change of the first motion parameter and the degree of change of the second motion parameter.
Optionally, the binding relationship is a correspondence relationship between a change frequency of the first motion parameter and a change frequency of the second motion parameter.
Optionally, the virtual object is constructed with a virtual model, and the virtual model comprises a first skeleton corresponding to the first virtual part;
the controlling the movement of the first virtual part of the virtual object according to the action data and the first association relation between the user preset part and the first virtual part of the virtual object comprises the following steps:
determining bone data of a first bone corresponding to the first virtual part according to the action data and the association relation between the preset part of the user and the first bone;
and controlling the movement of the first virtual part of the virtual object according to the bone data of the first bone.
Optionally, the virtual model further comprises a second bone corresponding to the second virtual part;
the controlling the movement of the second virtual part of the virtual object according to the second association relation between the first virtual part and the second virtual part of the virtual object comprises the following steps:
determining bone data of a second bone according to the bone data of the first bone and the association relation between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part;
and controlling the movement of a second virtual part of the virtual object according to the bone data of the second bone.
Optionally, the bone data comprises at least one of: rotation data, scaling data, movement data.
Optionally, the controlling the movement of the second virtual part of the virtual object according to the second association relation between the first virtual part and the second virtual part of the virtual object includes:
detecting the number of movements of the first virtual part;
and when the first virtual part moves for a preset number of times, controlling the second virtual part of the virtual object to move according to a second association relation between the first virtual part and the second virtual part of the virtual object.
Optionally, the virtual model includes at least one of: three-dimensional model, two-dimensional model.
Optionally, the preset part is the head, the first virtual part is the head, and the second virtual part includes at least one of the following: trunk, limbs, ears, tail, wings.
Optionally, the preset part and the first virtual part are one of the five sense organs, and the second virtual part includes at least one of the following: one of the five sense organs different from the first virtual part, trunk, limbs, tail, wings.
The embodiment of the application also provides a control device of the virtual object, which comprises:
the acquisition module is used for acquiring action data of a preset part of a user;
the first control module is used for controlling the movement of the first virtual part of the virtual object according to the action data and the first association relation between the preset part of the user and the first virtual part of the virtual object;
and the second control module is used for controlling the movement of the second virtual part of the virtual object according to the second association relation between the first virtual part and the second virtual part of the virtual object.
Embodiments of the present application also provide a computer readable storage medium storing a computer program adapted to be loaded by a processor to perform the steps in the method for controlling a virtual object according to any of the embodiments above.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the steps in the control method of the virtual object according to any embodiment by calling the computer program stored in the memory.
According to the virtual object control method and device, storage medium and electronic equipment of the embodiments of the present application, action data of a preset part of a user is acquired; movement of a first virtual part of the virtual object is controlled according to the action data and the association relation between the user's preset part and the first virtual part; and movement of a second virtual part of the virtual object is controlled according to the association relation between the first virtual part and the second virtual part. In other words, while the first virtual part of the virtual object follows the user's preset part, the second virtual part is controlled to move at the same time according to the association relation between the first and second virtual parts, so the movement of multiple virtual parts of the virtual object can be controlled merely by acquiring action data of the user's preset part. This improves the harmony and flexibility of the virtual object's movement and enriches the display effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic system diagram of a control device for a virtual object according to an embodiment of the present application.
Fig. 2 is a flow chart of a control method of a virtual object according to an embodiment of the present application.
Fig. 3a is a schematic view of an effect of opening the left eye of the three-dimensional virtual object in the control method of the virtual object according to the embodiment of the present application.
Fig. 3b is a schematic diagram of an effect of opening the left eye of the two-dimensional virtual object in the control method of the virtual object according to the embodiment of the present application.
Fig. 4a is a schematic view of the effect of the left eye of a three-dimensional virtual object looking upward in the virtual object control method provided in the embodiment of the present application.
Fig. 4b is a schematic view of the effect of the left eye of a two-dimensional virtual object looking upward in the virtual object control method provided in the embodiment of the present application.
Fig. 5a is a schematic view of the effect of the left eye of a three-dimensional virtual object looking downward in the virtual object control method provided in the embodiment of the present application.
Fig. 5b is a schematic view of the effect of the left eye of a two-dimensional virtual object looking downward in the virtual object control method provided in the embodiment of the present application.
Fig. 6a is a schematic diagram of a first effect of a first virtual part driving a second virtual part to move in the control method of a virtual object according to the embodiment of the present application.
Fig. 6b is a schematic diagram of a second effect of the first virtual part driving the second virtual part to move in the control method of the virtual object according to the embodiment of the present application.
Fig. 6c is a schematic diagram illustrating a third effect of the first virtual part driving the second virtual part to move in the control method of the virtual object according to the embodiment of the present application.
Fig. 6d is a schematic diagram of a fourth effect of the first virtual part driving the second virtual part to move in the control method of the virtual object according to the embodiment of the present application.
Fig. 7a is a schematic diagram of an effect of the virtual object control method provided in the embodiment of the present application when the first virtual part and the second virtual part do not move.
Fig. 7b is a schematic diagram of a fifth effect of the first virtual part driving the second virtual part to move in the control method of the virtual object according to the embodiment of the present application.
Fig. 7c is a schematic diagram of a sixth effect of the first virtual part driving the second virtual part to move in the control method of the virtual object according to the embodiment of the present application.
Fig. 7d is a schematic diagram of a seventh effect of the first virtual part driving the second virtual part to move in the control method of the virtual object according to the embodiment of the present application.
Fig. 7e is a schematic diagram of an eighth effect of the first virtual part driving the second virtual part to move in the control method of the virtual object according to the embodiment of the present application.
Fig. 7f is a schematic diagram of a ninth effect of the first virtual part driving the second virtual part to move in the control method of the virtual object according to the embodiment of the present application.
Fig. 8 is another flow chart of a control method of a virtual object according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a control device for a virtual object according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The embodiment of the application provides a virtual object control method, a virtual object control device, a storage medium and electronic equipment. Specifically, the virtual object control method in the embodiment of the present application may be executed by an electronic device, where the electronic device may be a terminal, a server, or the like. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen device, a personal computer (PC) or a personal digital assistant (PDA), and the terminal may further include a client, where the client may be an application client, a browser client running virtual object control software, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), and big data and artificial intelligence platforms.
For example, when the control method of the virtual object is run on the terminal, the terminal device stores control software of the virtual object. The terminal device is used for interacting with a user through a graphical user interface, for example, the terminal device downloads and runs control software for installing the virtual object. The way in which the terminal device presents the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device, or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including a control interface for a virtual object and receiving operation instructions generated by a user acting on the graphical user interface, and a processor for running control software for the virtual object, generating the graphical user interface, responding to the operation instructions, and controlling display of the graphical user interface on the touch display screen.
Referring to fig. 1, fig. 1 is a schematic system diagram of a control device for a virtual object according to an embodiment of the present application. The system may include at least one terminal 1000, at least one server 2000, at least one database 3000, and a network 4000. Terminal 1000 in the possession of a user can be connected to different servers via network 4000. Terminal 1000 can be any device having computing hardware capable of supporting and executing software products corresponding to the control methods of virtual objects. Terminal 1000 can include a motion capture device and/or a face capture device, such as a camera or the like, for gathering user motion data. The motion capture device and/or the face capture device may be integrated into one terminal 1000 (e.g., a smart phone, tablet, notebook, etc.), or may be separate terminals 1000, as shown in fig. 1.
In addition, when the system includes a plurality of terminals 1000, a plurality of servers 2000, and a plurality of networks 4000, different terminals 1000 may be connected to one another through different networks 4000 and different servers 2000. The network 4000 may be a wireless network or a wired network, such as a WLAN (Wireless Local Area Network), a LAN (Local Area Network), a cellular network, a 2G network, a 3G network, a 4G network, or a 5G network. In addition, different terminals 1000 may connect to other terminals or to a server using their own Bluetooth network or hotspot network. For example, multiple users may be online through different terminals 1000, connected through an appropriate network and synchronized with one another to support multi-person control of virtual objects. In addition, the system may include a plurality of databases 3000 coupled to different servers 2000, and information about virtual objects, such as action data, association relationships and bone data, may be stored in the databases 3000.
The embodiment of the application provides a control method of a virtual object, which can be executed by a terminal or a server. The embodiment of the application is described by taking a control method of a virtual object as an example executed by a terminal. The terminal comprises a touch display screen and a processor, wherein the touch display screen is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface. When a user operates the graphical user interface through the touch display screen, the graphical user interface can control the local content of the terminal by responding to the received operation instruction, and can also control the content of the opposite-end server by responding to the received operation instruction. For example, the operating instructions generated by the user acting on the graphical user interface include instructions for launching control software of the virtual object, and the processor is configured to launch control software of the virtual object after receiving the user provided instructions for launching control software of the virtual object. A touch display screen is a multi-touch-sensitive screen capable of sensing touch or slide operations performed simultaneously by a plurality of points on the screen. The user performs touch operation on the graphical user interface by using a finger, and when the graphical user interface detects the touch operation, the graphical user interface controls different virtual objects in the graphical user interface to execute actions corresponding to the touch operation. The processor may be configured to present a corresponding interface in response to an operation instruction generated by a touch operation of the user.
The following describes in detail specific embodiments.
In the present embodiment, description will be made from the viewpoint of a control apparatus of a virtual object, which may be integrated in an electronic device such as a terminal or a server.
Referring to fig. 2, fig. 2 is a flowchart of a virtual object control method according to an embodiment of the present application. The specific flow of the method may be as follows:
step 101, obtaining action data of a preset part of a user.
In this embodiment, the action data of the preset part of the user may be collected by arranging a motion capture device and/or a face capture device on the user; after the motion capture device and/or the face capture device collects the action data of the preset part of the user, it transmits the action data to the electronic device, so that the electronic device obtains the action data of the preset part of the user. The preset parts of the user refer to key parts of the user; they may include one or more parts and generally do not include all parts of the user. For example, the preset part may be the head of the user or the face of the user (including at least one of the five sense organs). The action data refers to motion parameters corresponding to the real-time motion of the preset part of the user, such as the deflection angle of the user's head or the degree of opening and closing of the user's eyes.
For example, the motion capture device may be disposed on the head of the user, i.e., the preset part of the user is the user's head, and the motion capture device captures the deflection angle of the user's head and transmits it to the electronic device. The face capture device may be disposed on the face of the user, i.e., the preset part of the user is one of the user's five sense organs (such as the eyes), and the face capture device collects the degree of opening and closing of the user's eyes and transmits it to the electronic device.
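For illustration only, the action data handed to the electronic device might be modelled as below. This is a minimal sketch: the patent does not prescribe any data format, so the field names and the raw-frame layout are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MotionData:
    # All field names are hypothetical; the patent does not specify a format.
    part: str                 # user preset part, e.g. "head" or "left_eye"
    yaw_degrees: float = 0.0  # head deflection angle from the motion capture device
    openness: float = 0.0     # eye/mouth opening degree in [0, 1] from the face capture device

def frame_to_motion_data(raw: dict) -> MotionData:
    """Convert one raw capture frame (an assumed dict layout) into MotionData."""
    return MotionData(
        part=raw["part"],
        yaw_degrees=raw.get("yaw", 0.0),
        openness=raw.get("openness", 0.0),
    )
```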
Step 102, controlling the movement of the first virtual part of the virtual object according to the action data and the first association relation between the user preset part and the first virtual part of the virtual object.
In this embodiment, the virtual object may serve as a virtual idol, a virtual anchor, or the like, and the virtual object may be a virtual human character, a virtual animal, or the like. The virtual object may be rendered for display on a display interface of the electronic device or presented by holographic projection.
The association relationship between each part of the user and each virtual part of the virtual object is established in advance, and since only the action data of the preset part of the user is acquired in step 101, only the first association relationship between the preset part of the user and the first virtual part of the virtual object may be established. The first association relationship refers to a one-to-one correspondence between a preset part of the user and a first virtual part of the virtual object. The first virtual part may refer to a part of the virtual object that is the same as a user preset part, the user preset part may include at least one (one or more) part of the user, the first virtual part may include at least one part of the virtual object, and the at least one part of the user corresponds to the at least one part of the virtual object one by one. For example, the user preset portion is a head of the user, and the first virtual portion is a head of the virtual object; the user preset portion is a face of the user (including at least one of the five sense organs, such as eyes), and the first virtual portion is a face of the virtual object (including at least one of the five sense organs, such as eyes).
After the action data of the preset part of the user is obtained, determining a first virtual part of the virtual object corresponding to the preset part of the user according to a preset first association relation, and controlling the first virtual part of the virtual object to move according to the action data so as to ensure that the virtual object moves along with the user. For example, the user's head deflects, and the virtual object's head deflects synchronously; the user blinks and the virtual object blinks synchronously.
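As a rough sketch of the one-to-one first association relation described above, the mapping could be held in a simple lookup table; the part names and the lookup function below are hypothetical:

```python
# Hypothetical one-to-one mapping from user preset parts to first virtual parts.
FIRST_ASSOCIATION = {
    "head": "virtual_head",
    "left_eye": "virtual_left_eyelid",
    "mouth": "virtual_mouth",
}

def first_virtual_part_for(user_part: str) -> str:
    """Return the first virtual part bound to the captured user part."""
    return FIRST_ASSOCIATION[user_part]
```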
The virtual object is constructed with a virtual model, which may be a two-dimensional model or a three-dimensional model; the virtual model may also comprise both a two-dimensional model and a three-dimensional model, i.e., the virtual object is constructed with a two-dimensional model and a three-dimensional model respectively, where the virtual object constructed with the two-dimensional model is a two-dimensional virtual object and the virtual object constructed with the three-dimensional model is a three-dimensional virtual object. The two-dimensional model and the three-dimensional model can set control parameters and performance effects according to the same standard, i.e., inputting the same control parameters into the two-dimensional model and the three-dimensional model achieves the same performance effect. For example, the action data of the preset part of the user is input into the two-dimensional model and the three-dimensional model respectively, and each model can be controlled to move synchronously with the preset part of the user. The two-dimensional model has the advantage of a more distinctly anime-style image, while the three-dimensional model has the advantages of finer actions and a higher synchronization rate with the user. Because the two-dimensional model and the three-dimensional model adopt the same standard in this embodiment, the strengths of both can be combined, processing data under different standards can be avoided, and the amount of data processing is reduced.
The three-dimensional model and the two-dimensional model each comprise a set of basic bones, and each set of basic bones comprises a first bone corresponding to the first virtual part; the first bone may comprise one basic bone or a plurality of basic bones. For different first virtual parts, the number of basic bones in the corresponding first bone may differ. This embodiment achieves movement of the first virtual part by controlling movement of the first bone.
Specifically, the controlling the movement of the first virtual part of the virtual object according to the motion data and the first association relationship between the user preset part and the first virtual part of the virtual object in step 102 includes: determining bone data of a first bone corresponding to the first virtual part according to the action data and the association relation between the preset part and the first bone; and controlling the movement of the first virtual part of the virtual object according to the bone data of the first bone.
The association relation between the preset part of the user and the first bone, i.e., the correspondence between the preset part of the user and the first bone, is preset, thereby establishing the first association relation between the preset part of the user and the first virtual part of the virtual object. After the action data of the preset part of the user is obtained, the first bone associated with the preset part of the user is determined according to the association relation, and the bone data of the first bone is determined according to the action data. The conversion relation between the action data and the bone data of the first bone can be preset, so that after the action data is acquired, the bone data of the first bone can be determined rapidly according to the conversion relation. The bone data of the first bone is input to the first bone so that the first bone acts, and the movement of the first bone makes the first virtual part of the virtual object move synchronously with the preset part of the user.
The bone data of the first bone includes at least one of: rotation data, scaling data, movement data. The rotation data may be a rotation angle according to which the first bone may perform a rotation operation, and the first virtual part may be rotated according to the rotation operation of the first bone; the scaling data may be a scaling ratio according to which the first bone may perform a scaling action, and the first virtual part may be scaled according to the scaling action of the first bone; the movement data may be movement displacement, the first bone may perform movement according to the movement displacement, and the first virtual part may move according to the movement of the first bone.
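A minimal sketch of such bone data and one possible action-to-bone conversion follows; the 1:1 angle mapping is an assumption, since the patent leaves the exact conversion relation to be preset by the implementer:

```python
from dataclasses import dataclass

@dataclass
class BoneData:
    # One record per basic bone; the defaults mean "no change".
    rotation_degrees: float = 0.0  # rotation data
    scale: float = 1.0             # scaling data
    translation: float = 0.0       # movement data

def head_action_to_head_bone(head_yaw_degrees: float) -> BoneData:
    # Assumed 1:1 conversion: the head bone rotates by the same angle
    # as the deflection of the user's head.
    return BoneData(rotation_degrees=head_yaw_degrees)
```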
For example, when the head of the user deflects leftward, the leftward deflection angle of the user's head is obtained, the first virtual part of the virtual object is determined to be the head of the virtual object, and the first bone corresponding to the head of the virtual object is determined to be the head bone. Meanwhile, the bone data of the head bone is determined according to the leftward deflection angle of the user's head, and the bone data is input to the head bone to control the head bone to act, so that the head of the virtual object deflects leftward by the same angle as the user's head. For another example, when the left eye of the user opens (the user's left eyelid moves upward), the degree of opening of the user's left eye is obtained, the first virtual part of the virtual object is determined to be the left eyelid of the virtual object, and the first bone corresponding to the left eyelid is determined to be the left eyelid bone. Meanwhile, the bone data of the left eyelid bone is determined according to the degree of opening of the user's left eye, and the bone data is input to the left eyelid bone to control the left eyelid bone to act, so that the left eyelid of the virtual object moves upward, i.e., the left eye of the virtual object opens to the same degree as the user's left eye. Referring to fig. 3a and 3b, fig. 3a is a schematic view of the effect of the left eye of the three-dimensional virtual object opening, and fig. 3b is a schematic view of the effect of the left eye of the two-dimensional virtual object opening.
Similarly, when the user's right eye opens, the right eyelid of the virtual object moves upward, so that the right eye of the virtual object opens synchronously. When the user's eyes look upward (relative to normally open eyes, the user's eyelids move upward and the degree of eye opening is larger than that of normally open eyes), the eyelids of the virtual object move upward, so that the eyes of the virtual object look upward synchronously. Referring to fig. 4a and 4b, fig. 4a is a schematic view of the effect of a three-dimensional virtual object looking upward, and fig. 4b is a schematic view of the effect of a two-dimensional virtual object looking upward. When the user's eyes look downward (relative to normally open eyes, the user's eyelids move downward and the degree of eye opening is smaller than that of normally open eyes), the eyelids of the virtual object move downward, so that the eyes of the virtual object look downward synchronously. Referring to fig. 5a and 5b, fig. 5a is a schematic view of the eye-down effect of a three-dimensional virtual object, and fig. 5b is a schematic view of the eye-down effect of a two-dimensional virtual object. When the user's eyes look left and right (the eyeballs move left and right), the eyeballs of the virtual object move left and right, so that the eyes of the virtual object look left and right synchronously. When the user's eyebrows move up and down, the virtual object's eyebrows move up and down synchronously. When the user opens the mouth, the mouth of the virtual object enlarges, and the virtual object opens its mouth synchronously.
And step 103, controlling the movement of the second virtual part of the virtual object according to the second association relation between the first virtual part and the second virtual part of the virtual object.
In this embodiment, the association relationship between each virtual part and other virtual parts in the virtual object is pre-established. Since only the action data of the user's preset part is acquired in step 101, only the second association relationship between the first virtual part and the other virtual parts associated with it (i.e., the second virtual part) may be established. The second association relationship refers to the correspondence between the first virtual part and the second virtual part of the virtual object, together with the binding relationship between the motion state of the first virtual part and the motion state of the corresponding second virtual part. The motion state of the first virtual part comprises a first motion parameter, and the first motion parameter comprises at least one of the following: rotation parameters, opening and closing parameters, swinging parameters, scaling parameters, etc. For example, if the first virtual part is an eye, the motion state of the first virtual part includes an opening and closing parameter (a first motion parameter). The motion state of the second virtual part comprises a second motion parameter, and the second motion parameter comprises at least one of the following: rotation parameters, opening and closing parameters, swinging parameters, scaling parameters, etc. For example, if the second virtual part is an ear, the motion state of the second virtual part includes a rotation parameter (a second motion parameter).
The binding relationship between the motion state of the first virtual part and the motion state of the corresponding second virtual part may be the corresponding relationship between the first motion parameter and the second motion parameter. Wherein the type of the first motion parameter may be different from the type of its corresponding second motion parameter. For example, the first virtual part is a head, the motion state of the head is rotation, the first motion parameter includes a rotation parameter of the head, the corresponding second virtual part is a tail, the motion state of the tail is swing, and the corresponding second motion parameter includes a swing parameter of the tail.
The degree of change of the first motion parameter and the degree of change of the second motion parameter may be different, that is, the binding relationship between the motion state of the first virtual part and the motion state of the corresponding second virtual part may be a proportional relationship between the degree of change of the first motion parameter and the degree of change of the second motion parameter. The degree of change can be rotation angle, opening and closing size, swing amplitude, scaling size and the like. For example, the first virtual part is an eyeball, the motion state of the eyeball is rotation, the corresponding second virtual part is a trunk, the motion state of the trunk is swinging, the degree of change of the first motion parameter is that the eyeball rotates by a first angle (such as 90 degrees clockwise), and the degree of change of the corresponding second motion parameter is that the trunk swings by a first amplitude (such as 30 degrees rightwards); when the degree of change of the first motion parameter is that the eyeball rotates by a second angle (such as 45 degrees anticlockwise), the corresponding degree of change of the second motion parameter is that the trunk swings by a second amplitude (such as 15 degrees leftwards).
The change frequency of the first motion parameter and the change frequency of the second motion parameter may be different; that is, the binding relationship between the motion state of the first virtual part and the motion state of the corresponding second virtual part may be the correspondence between the change frequency of the first motion parameter and the change frequency of the second motion parameter. The change frequency can be the number of rotations, openings and closings, swings, scalings, and the like within a preset time (such as unit time). For example, the first virtual part is an eye whose movement state is opening and closing (blinking), and the corresponding second virtual part is a tail whose movement state is swaying. When the first motion parameter is an eye opening and closing (blinking) frequency of five times per minute (i.e., blinking five times within one minute), the frequency of tail swaying is once per minute (i.e., the tail sways once per minute).
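The two binding-relation flavours just described can be sketched as plain functions; the ratios below are taken from the text's examples (90 degrees of eyeball rotation to 30 degrees of trunk sway, five blinks to one tail sway) and are not prescribed values:

```python
def trunk_sway_from_eyeball(eyeball_rotation_degrees: float) -> float:
    # Proportional relation between degrees of change: 90 -> 30, i.e. 3:1.
    return eyeball_rotation_degrees / 3.0

def tail_sways_per_minute(blinks_per_minute: float) -> float:
    # Frequency relation: five blinks per minute -> one tail sway per minute.
    return blinks_per_minute / 5.0
```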
The first virtual part may include at least one (one or more) part of the virtual object, the second virtual part may include at least one part of the virtual object, and each part in the first virtual part corresponds to at least one part in the second virtual part. For different first virtual parts, the associated second virtual parts may be the same or different. In addition, the second virtual part may be a part that the user does not have, that is not captured by the motion capture device and/or the face capture device, or that the user does not wish to synchronize. For example, the preset part of the user is the user's head, the first virtual part is the virtual object's head, and the second virtual part is the virtual object's trunk and limbs; or the preset part of the user is the user's face (including at least one of the five sense organs), the first virtual part is the virtual object's face (including at least one of the five sense organs), and the second virtual part is the virtual object's ears, tail and/or wings. The second virtual part may also be another part, which is not particularly limited here.
When the first virtual part of the virtual object moves, determining a second virtual part related to the first virtual part according to a preset association relation, so as to control the movement of the second virtual part according to the movement state of the first virtual part, so that the second virtual part and the first virtual part move simultaneously, and the harmony and flexibility of the movement are ensured. For example, the head of the virtual object deflects, while the torso of the virtual object deflects; the left eye of the virtual object blinks and the left ear of the virtual object beats.
The three-dimensional model and the two-dimensional model constructed for the virtual object each comprise a set of basic bones, and each set of basic bones also comprises a second bone corresponding to the second virtual part; the second bone may comprise one basic bone or a plurality of basic bones. For different second virtual parts, the number of basic bones in the corresponding second bone may differ. This embodiment achieves movement of the second virtual part by controlling movement of the second bone.
Specifically, the controlling, in step 103, the movement of the second virtual part of the virtual object according to the second association relationship between the first virtual part and the second virtual part of the virtual object includes: determining bone data of a second bone according to the bone data of the first bone and a preset association relation between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part; and controlling the movement of a second virtual part of the virtual object according to the bone data of the second bone.
The association relation between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part, i.e., the association relation between the first bone and the second bone, is preset, thereby establishing the second association relation between the first virtual part and the second virtual part of the virtual object. After the bone data of the first bone is determined, the bone data of the first bone is used to control the first bone's action; meanwhile, the second bone associated with the first bone is determined according to the association relation, and the bone data of the second bone is determined according to the bone data of the first bone. The conversion relation between the bone data of the first bone and the bone data of the second bone can be preset, so that after the bone data of the first bone is determined, the bone data of the second bone can be determined rapidly according to the conversion relation. The bone data of the second bone is input to the second bone so that the second bone acts, and the movement of the second bone realizes the movement of the second virtual part of the virtual object.
The bone data of the second bone includes at least one of: rotation data, scaling data, movement data. The rotation data may be a rotation angle according to which the second bone may perform a rotation operation, and the second virtual part may be rotated according to the rotation operation of the second bone; the scaling data may be a scaling ratio according to which the second bone may perform a scaling action, and the second virtual part may be scaled according to the scaling action of the second bone; the movement data may be a movement displacement, the second bone may perform a movement motion according to the movement displacement, and the second virtual part may move according to the movement motion of the second bone.
The bone data of the first bone may be of a different data type from the bone data of the second bone. For example, the bone data of the first bone may be movement data while the bone data of the second bone is rotation data; or the bone data of the first bone may be rotation data while the bone data of the second bone is rotation data and movement data.
The rotation data, scaling data and/or movement data of the first bone may control the movement state of the first virtual part and the rotation data, scaling data and/or movement data of the second bone may control the movement state of the second virtual part. Through presetting the conversion relation between the rotation data, the scaling data and/or the movement data of the first bone and the rotation data, the scaling data and/or the movement data of the second bone, the binding relation between the motion parameters of the first virtual part and the motion parameters of the second virtual part can be determined, so that when the motion parameters of the first virtual part change, the motion parameters of the second virtual part can be driven to change, thereby realizing the motion of the first virtual part and driving the motion of the second virtual part.
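Reusing the BoneData sketch above, a preset linear conversion from first-bone data to second-bone data might look like this; the uniform ratio is illustrative, and a real implementation could equally map one data type to another:

```python
def second_bone_from_first(first: "BoneData", ratio: float) -> "BoneData":
    # Assumed linear conversion relation: the second bone follows the first
    # at a reduced amplitude given by ratio.
    return BoneData(
        rotation_degrees=first.rotation_degrees * ratio,
        scale=1.0 + (first.scale - 1.0) * ratio,
        translation=first.translation * ratio,
    )
```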
For example, the first virtual part of the virtual object is the head of the virtual object, the first bone corresponding to the head is the head bone, and the head bone is associated with the trunk bone and the limb bones, so the second virtual part of the virtual object is determined to be the trunk and limbs (four limbs) of the virtual object. When the head of the virtual object deflects leftward, the trunk and limbs of the virtual object are controlled to deflect leftward according to the binding relation between the deflection angle of the head and the deflection angles of the trunk and limbs, but the deflection angle of the trunk and limbs is smaller than the deflection angle of the head, as shown in fig. 6a. Likewise, when the head of the virtual object tilts backward, the trunk of the virtual object is controlled to tilt slightly backward according to the binding relation between the head's backward tilt angle and the trunk's backward tilt angle, as shown in fig. 6b; when the head of the virtual object tilts forward, the trunk is controlled to tilt slightly forward according to the binding relation between the head's forward tilt angle and the trunk's forward tilt angle, as shown in fig. 6c; and when the head of the virtual object deflects leftward, the trunk is controlled to lean slightly leftward according to the binding relation between the head's leftward deflection angle and the trunk's leftward deflection angle, as shown in fig. 6d.
It should be noted that the motion amplitude of the trunk and limbs is generally smaller than the motion amplitude of the head, and the ratio of the head's motion amplitude to that of the trunk and limbs may be 3:1. For example, the amplitude of the head's left-right movement ranges from -30 to 30, where -30 refers to the extreme position of leftward head deflection and 30 refers to the extreme position of rightward head deflection, while the amplitude of the trunk and limbs' left-right movement ranges from -10 to 10.
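The 3:1 amplitude example works out as below; the clamp to the head's limit positions is an assumption about how the extreme positions are enforced:

```python
def trunk_deflection_from_head(head_deflection: float) -> float:
    head = max(-30.0, min(30.0, head_deflection))  # clamp to the head's limit positions
    return head / 3.0                              # maps -30..30 onto -10..10
```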
For another example, the first virtual part of the virtual object is the binocular eyelids of the virtual object, the first bones corresponding to the eyelids are the eyelid bones (including a left eyelid bone and a right eyelid bone), the left eyelid bone is associated with the left ear bone, and the right eyelid bone is associated with the right ear bone, so the second virtual part of the virtual object is determined to be both ears of the virtual object. When the eyes of the virtual object are normally open, the ears of the virtual object stand normally erect, as shown in fig. 7a; when the left eyelid of the virtual object moves (the left eye blinks), the left ear of the virtual object bends downward, as shown in fig. 7b; when the right eyelid of the virtual object moves (the right eye blinks), the right ear of the virtual object bends downward, as shown in fig. 7c; when both eyelids of the virtual object move downward (both eyes close), both ears of the virtual object bend, as shown in fig. 7d; when the left eyelid of the virtual object continues to move upward relative to the normally open eye (the left eye opens to its limit), the left ear of the virtual object stands vertically to its limit, as shown in fig. 7e; and when both eyelids of the virtual object continue to move upward relative to the normally open eyes (both eyes open to their limit), both ears of the virtual object stand vertically to their limit, as shown in fig. 7f.
It should be noted that the left ear bone and the right ear bone may each include a plurality of basic bones. For example, the left ear bone includes a parent bone and two child bones, and the left eyelid bone is associated with all three basic bones so as to control their movement, ensuring that the left ear moves flexibly rather than stiffly.
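A sketch of one eyelid bone driving an ear chain of a parent bone and two child bones is given below; the per-bone weights and the 45-degree maximum bend are invented purely to keep the example concrete:

```python
# Hypothetical weights: the parent bone bends most, the child bones follow.
EAR_CHAIN_WEIGHTS = {"ear_parent": 1.0, "ear_child_1": 0.6, "ear_child_2": 0.3}

def left_ear_bend_angles(left_eye_openness: float) -> dict:
    # openness 1.0 = eye fully open -> ear upright (bend 0);
    # openness 0.0 = eye closed     -> ear fully bent downward.
    bend = (1.0 - left_eye_openness) * 45.0
    return {bone: bend * weight for bone, weight in EAR_CHAIN_WEIGHTS.items()}
```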
In some embodiments, movement of a first virtual site necessarily triggers movement of its associated second virtual site. In other embodiments, random parameters may be set such that movement of a first virtual location randomly triggers movement of its associated second virtual location, i.e., movement of the first virtual location, sometimes triggers movement of its associated second virtual location, sometimes does not trigger movement of its associated second virtual location.
Specifically, the controlling, in step 103, the movement of the second virtual part of the virtual object according to the preset association relationship between the first virtual part and the second virtual part of the virtual object includes: detecting the number of movements of the first virtual part; and when the first virtual part moves for a preset number of times, controlling the second virtual part of the virtual object to move according to the preset association relation between the first virtual part and the second virtual part of the virtual object.
When the movement of the first virtual part is frequent, visual fatigue is caused if the associated second virtual part moves all the time, so that the embodiment sets a preset number of times, the movement of the second virtual part is triggered only when the first virtual part moves for the preset number of times, and the movement of the second virtual part is not triggered if the number of times of the movement of the first virtual part does not reach the preset number of times.
For example, the first virtual part is the binocular eyelids of the virtual object, which move up and down frequently (caused by the user's frequent blinking), and the second virtual part is both ears of the virtual object. If the ears of the virtual object moved every time the eyes blink, visual fatigue would result, so the preset number is set to 3 to 5; that is, each time the eyes of the virtual object blink 3 to 5 times, the ears of the virtual object twitch once, which adds interest.
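A sketch of the preset-count trigger follows; re-randomising the threshold after each trigger is one way to realise the random parameters mentioned earlier, not something the text mandates:

```python
import random

class BlinkTrigger:
    """The ears twitch once every 3 to 5 blinks, per the example above."""

    def __init__(self) -> None:
        self._count = 0
        self._threshold = random.randint(3, 5)

    def on_blink(self) -> bool:
        self._count += 1
        if self._count >= self._threshold:
            self._count = 0
            self._threshold = random.randint(3, 5)  # assumed re-randomisation
            return True   # trigger the ear movement this time
        return False
```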
All the above technical solutions may be combined to form an optional embodiment of the present application, which is not described here in detail.
According to the virtual object control method described above, action data of a preset part of a user is acquired; movement of a first virtual part of the virtual object is controlled according to the action data and the association relation between the user's preset part and the first virtual part; and movement of a second virtual part of the virtual object is controlled according to the association relation between the first virtual part and the second virtual part. In other words, while the first virtual part of the virtual object follows the user's preset part, the second virtual part is controlled to move at the same time according to the association relation between the first and second virtual parts. Thus, the movement of multiple virtual parts of the virtual object can be controlled merely by acquiring the action data of the user's preset part, which improves the harmony and flexibility of the virtual object's movement and enriches the display effect.
Referring to fig. 8, fig. 8 is another flow chart of a control method of a virtual object according to an embodiment of the present application. The specific flow of the method can be as follows:
in step 201, a virtual model is constructed for a virtual object, the virtual object comprising a first virtual part and a second virtual part, the virtual model comprising a first skeleton corresponding to the first virtual part and a second skeleton corresponding to the second virtual part.
For example, the virtual model of the virtual object is made using software such as Live2D or 3D/CG tools, so the virtual model may be a three-dimensional model or a two-dimensional model; a virtual object constructed from a three-dimensional model is a three-dimensional virtual object, and a virtual object constructed from a two-dimensional model is a two-dimensional virtual object.
Step 202, setting an association relationship between a preset part of a user and a first bone corresponding to a first virtual part.
The user preset portion and the first virtual portion may be the same portion, for example, the user preset portion is a mouth of the user, and the first virtual portion is a mouth of the virtual object.
Step 203, setting an association relationship between a first bone corresponding to the first virtual part and a second bone corresponding to the second virtual part.
The first virtual location is a different location than the second virtual location, for example, the first virtual location is the mouth of the virtual object and the second virtual location is the tail of the virtual object.
Step 204, obtaining action data of a preset part of the user.
For example, motion data of the user's mouth, such as the degree of opening of the user's mouth, is obtained.
Step 205, determining bone data of the first bone according to the motion data and the association relationship between the preset position of the user and the first bone corresponding to the first virtual position.
For example, bone data of a mouth bone corresponding to the mouth of the virtual object is determined according to the degree of opening of the mouth of the user.
Step 206, controlling the first virtual part movement of the virtual object according to the bone data of the first bone.
For example, according to bone data of bones of the mouth, the mouth of the virtual object is controlled to move synchronously with the mouth of the user, i.e. the mouth of the virtual object is opened to the same extent as the mouth of the user.
Step 207, determining bone data of the second bone according to the bone data of the first bone and the association relationship between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part.
For example, the bone data of the tail bone corresponding to the tail of the virtual object is determined according to the bone data of the mouth bone corresponding to the mouth of the virtual object and the conversion relation between the bone data of the mouth bone and the bone data of the tail bone.
Step 208, controlling the second virtual part motion of the virtual object according to the bone data of the second bone.
For example, the tail of the virtual object is controlled to swing according to the bone data of the tail bone. In this way, acquiring only the action data of the user's mouth controls the mouth and tail movement of the virtual object simultaneously, improving the harmony and flexibility of the virtual object's movement.
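Under the same assumptions, steps 207 and 208 can be sketched as below; the linear conversion relation between the mouth-bone and tail-bone data is an illustrative choice, since this application leaves the concrete conversion relation open:

    TAIL_PER_JAW = 1.5  # assumed conversion ratio: tail degrees per jaw degree

    jaw_rotation = 18.0                       # bone data of the first bone
    tail_swing = TAIL_PER_JAW * jaw_rotation  # step 207: bone data of second bone
    print(f"tail bone swing: {tail_swing:.1f} degrees")  # step 208: apply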
All the above technical solutions may be combined to form an optional embodiment of the present application, which is not described here in detail.
According to the virtual object control method described above, when the first virtual part of the virtual object moves along with the user's preset part, the second virtual part is controlled to move at the same time according to the preset association relation between the first and second virtual parts. The movement of multiple virtual parts of the virtual object can therefore be controlled by acquiring only the action data of the user's preset part, which improves the harmony and flexibility of the virtual object's movement and enriches the expression effect.
In order to facilitate better implementation of the virtual object control method in the embodiment of the present application, the embodiment of the present application further provides a virtual object control device. Referring to fig. 9, fig. 9 is a schematic structural diagram of a control device for a virtual object according to an embodiment of the present application. The control apparatus 300 of the virtual object may include:
An obtaining module 301, configured to obtain action data of a preset portion of a user;
the first control module 302 is configured to control a first virtual part of the virtual object to move according to the motion data and a first association relationship between the user preset part and the first virtual part of the virtual object;
and the second control module 303 is configured to control movement of a second virtual part of the virtual object according to a second association relationship between the first virtual part and the second virtual part of the virtual object.
In this embodiment, the motion data of the user's preset part may be collected by arranging a motion capture device and/or a face capture device on the user; after collecting the motion data, the capture device transmits it to the electronic device, so that the electronic device obtains the action data of the user's preset part. The user's preset part refers to a key part of the user; it may include one or more parts and generally does not include all parts of the user. For example, the preset part may be the user's head, the user's face, and so on. The motion data refers to the motion parameters corresponding to the real-time movement of the preset part, such as the deflection angle of the user's head or the opening and closing degree of the user's eyes.
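As a rough sketch of this acquisition path, the loop below polls a stand-in capture source and hands each frame to the control logic; read_capture_frame and its field names are hypothetical, since this application does not tie the method to any particular capture SDK:

    import time

    def read_capture_frame() -> dict:
        """Stand-in for one frame pushed by a motion/face capture device."""
        return {"head_yaw": 5.0, "eye_openness": 0.9}

    def acquisition_loop(handle_frame, frames: int = 3, fps: int = 30) -> None:
        """Poll the capture source and forward action data to the controller."""
        for _ in range(frames):
            frame = read_capture_frame()  # action data of the user's preset part
            handle_frame(frame)
            time.sleep(1.0 / fps)

    acquisition_loop(lambda frame: print(frame))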
The association relation between each part of the user and each virtual part of the virtual object is established in advance. Since only the action data of the user's preset part is acquired in step 101, it suffices to establish the association relation between the preset part and the first virtual part of the virtual object. The first virtual part is the part of the virtual object corresponding to the user's preset part and may include one or more parts of the virtual object.
After the action data of the user's preset part is obtained, the first virtual part associated with the preset part is determined according to the preset association relation, and the first virtual part is controlled to move according to the action data, so that the virtual object follows the user's movement.
Likewise, since step 101 acquires only the action data of the user's preset part, it suffices to establish the association relation between the first virtual part and the other virtual parts (i.e., the second virtual part). The second virtual part may include one or more parts of the virtual object, and the second virtual parts associated with different first virtual parts may be the same or different.
When the first virtual part of the virtual object moves, the second virtual part associated with it is determined according to the preset association relation, and the second virtual part is controlled to move according to the motion state of the first virtual part. The two parts thus move simultaneously, which keeps the movement harmonious and flexible.
Optionally, the first association relationship is a one-to-one correspondence relationship between the user preset part and a first virtual part of the virtual object;
the second association relationship is a corresponding relationship between a first virtual part and a second virtual part of the virtual object, and a binding relationship between a motion state of the first virtual part and a motion state of the corresponding second virtual part.
Optionally, the motion state of the first virtual part includes a first motion parameter, and the first motion parameter includes at least one of the following: rotation parameters, opening and closing parameters, swinging parameters and scaling parameters; the motion state of the second virtual part comprises a second motion parameter, and the second motion parameter comprises at least one of the following: rotation parameters, opening and closing parameters, swinging parameters and scaling parameters;
The binding relation is the corresponding relation between the first motion parameter and the second motion parameter.
Optionally, the binding relationship is a proportional relationship between the degree of change of the first motion parameter and the degree of change of the second motion parameter.
Optionally, the binding relationship is a correspondence relationship between a change frequency of the first motion parameter and a change frequency of the second motion parameter.
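The two optional binding relations can be sketched as follows; the 0.5 ratio and the doubling factor are arbitrary example values, not values from this application:

    def bind_proportional(first_delta: float, ratio: float = 0.5) -> float:
        """Proportional binding: the second part changes ratio x the first's change."""
        return ratio * first_delta

    def bind_frequency(first_hz: float, factor: float = 2.0) -> float:
        """Frequency binding: map the first part's change frequency to the second's."""
        return factor * first_hz

    print(bind_proportional(10.0))  # first moves 10 degrees -> second moves 5
    print(bind_frequency(1.0))      # first changes at 1 Hz -> second at 2 Hz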
Optionally, the virtual object is constructed with a virtual model, and the virtual model comprises a first skeleton corresponding to the first virtual part;
the first control module 302 is further configured to:
determining bone data of a first bone corresponding to the first virtual part according to the action data and the association relation between the preset part of the user and the first bone;
and controlling the movement of the first virtual part of the virtual object according to the bone data of the first bone.
Optionally, the virtual model further comprises a second bone corresponding to the second virtual location;
the second control module 303 is further configured to:
determining bone data of a second bone according to the bone data of the first bone and the association relation between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part;
And controlling the movement of a second virtual part of the virtual object according to the bone data of the second bone.
Optionally, the bone data comprises at least one of: rotation data, scaling data, movement data.
Optionally, the second control module 303 is further configured to:
detecting the number of movements of the first virtual part;
and when the first virtual part moves for a preset number of times, controlling the second virtual part of the virtual object to move according to a second association relation between the first virtual part and the second virtual part of the virtual object.
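A sketch of this optional trigger follows; the threshold of 3 movements is an arbitrary example value:

    class SecondPartTrigger:
        """Drive the second virtual part only after the first virtual part
        has moved a preset number of times."""

        def __init__(self, preset_count: int = 3):
            self.preset_count = preset_count
            self.movements = 0

        def on_first_part_moved(self) -> bool:
            self.movements += 1  # detect and count a first-part movement
            return self.movements >= self.preset_count

    trigger = SecondPartTrigger()
    for i in range(4):
        if trigger.on_first_part_moved():
            print(f"movement {i + 1}: control the second virtual part")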
Optionally, the virtual model includes at least one of: three-dimensional model, two-dimensional model.
Optionally, the preset portion is a head, the first virtual portion is a head, and the second virtual portion includes at least one of the following: trunk, limbs, ears, tails, wings.
Optionally, the preset part and the first virtual part are one of the five sense organs, and the second virtual part includes at least one of the following: one of the five sense organs different from the first virtual part, the torso, the limbs, the tail, or the wings.
All the above technical solutions may be combined to form an optional embodiment of the present application, which is not described here in detail.
The virtual object control device provided by the embodiment of the present application acquires action data of a preset part of the user, controls a first virtual part of the virtual object to move according to the action data and the association relation between the user's preset part and the first virtual part, and controls a second virtual part to move according to the association relation between the first virtual part and the second virtual part. While the first virtual part follows the user's preset part, the second virtual part is driven at the same time through its association with the first virtual part, so the movement of multiple virtual parts can be controlled by acquiring the action data of a single preset part. This improves the harmony and flexibility of the virtual object's movement and enriches the expression effect.
Correspondingly, the embodiment of the present application also provides an electronic device, which may be a terminal or a server. The terminal may be a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a personal computer (PC), a personal digital assistant (PDA), or similar terminal equipment. Referring to fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. It will be appreciated by those skilled in the art that the electronic device structure shown in the figure does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The processor 401 is a control center of the electronic device 400, connects various parts of the entire electronic device 400 using various interfaces and lines, and performs various functions of the electronic device 400 and processes data by running or loading software programs and/or modules stored in the memory 402, and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device 400.
In the embodiment of the present application, the processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more application programs into the memory 402 according to the following steps, and the processor 401 executes the application programs stored in the memory 402, so as to implement various functions:
acquiring action data of a preset part of a user; controlling the movement of a first virtual part of the virtual object according to the action data and the association relation between the preset part of the user and the first virtual part of the virtual object; and controlling the movement of the second virtual part of the virtual object according to the association relation between the first virtual part and the second virtual part of the virtual object.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated here.
Optionally, as shown in fig. 10, the electronic device 400 further includes: a touch display 403, a radio frequency circuit 404, an audio circuit 405, an input unit 406, and a power supply 407. The processor 401 is electrically connected to the touch display 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power supply 407, respectively. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 10 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The touch display 403 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display 403 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect touch operations by the user on or near it (such as operations performed on or near the touch panel with a finger, stylus, or any other suitable object or accessory) and to generate corresponding operation instructions that trigger the corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 401, and it can also receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the operation is passed to the processor 401 to determine the type of touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 403 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display 403 may also implement an input function as part of the input unit 406.
In the embodiment of the application, the animation generation software is executed by the processor 401 to generate a graphical user interface on the touch display screen 403. The touch display 403 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.
The radio frequency circuitry 404 may be used to transceive radio frequency signals to establish wireless communication with a network device or other electronic device via wireless communication.
The audio circuit 405 may be used to provide an audio interface between the user and the electronic device through a speaker and microphone. The audio circuit 405 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which the audio circuit 405 receives and converts into audio data. The audio data is processed by the processor 401 and then sent via the radio frequency circuit 404 to, for example, another electronic device, or output to the memory 402 for further processing. The audio circuit 405 may also include an earbud jack to provide communication between peripheral headphones and the electronic device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to power the various components of the electronic device 400. Alternatively, the power supply 407 may be logically connected to the processor 401 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system. The power supply 407 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 10, the electronic device 400 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform steps in any of the virtual object control methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
acquiring action data of a preset part of a user; controlling the movement of a first virtual part of the virtual object according to the action data and the association relation between the preset part of the user and the first virtual part of the virtual object; and controlling the movement of the second virtual part of the virtual object according to the association relation between the first virtual part and the second virtual part of the virtual object.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated here.
The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Since the computer program stored in the storage medium can execute the steps in any of the virtual object control methods provided in the embodiments of the present application, it can achieve the beneficial effects of any of those methods; see the previous embodiments for details, which are not repeated here.
The above describes in detail the virtual object control method, apparatus, storage medium, and electronic device provided in the embodiments of the present application. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the scope of application according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (15)

1. A method for controlling a virtual object, the method comprising:
acquiring action data of a preset part of a user;
controlling the movement of a first virtual part of the virtual object according to the action data and the first association relation between the preset part of the user and the first virtual part of the virtual object;
controlling the movement of a second virtual part of the virtual object according to a second association relation between the first virtual part and the second virtual part of the virtual object; the second association relationship is a corresponding relationship between a first virtual part and a second virtual part of the virtual object, and a binding relationship between a motion state of the first virtual part and a motion state of the corresponding second virtual part; the first virtual location and the second virtual location are different virtual locations.
2. The method of claim 1, wherein the first association is a one-to-one correspondence between the user preset location and a first virtual location of the virtual object.
3. The method of claim 1, wherein the motion state of the first virtual part comprises a first motion parameter, the first motion parameter comprising at least one of: rotation parameters, opening and closing parameters, swinging parameters and scaling parameters; the motion state of the second virtual part comprises a second motion parameter, and the second motion parameter comprises at least one of the following: rotation parameters, opening and closing parameters, swinging parameters and scaling parameters;
the binding relation is the corresponding relation between the first motion parameter and the second motion parameter.
4. The method for controlling a virtual object according to claim 3, wherein the binding relationship is a proportional relationship between a degree of change of the first motion parameter and a degree of change of the second motion parameter.
5. The method for controlling a virtual object according to claim 3, wherein the binding relationship is a correspondence relationship between a change frequency of the first motion parameter and a change frequency of the second motion parameter.
6. The method of controlling a virtual object according to claim 1, wherein the virtual object is constructed with a virtual model including a first skeleton corresponding to the first virtual part;
the controlling the movement of the first virtual part of the virtual object according to the action data and the first association relation between the user preset part and the first virtual part of the virtual object comprises the following steps:
determining bone data of a first bone corresponding to the first virtual part according to the action data and the association relation between the preset part of the user and the first bone;
and controlling the movement of the first virtual part of the virtual object according to the bone data of the first bone.
7. The method of controlling a virtual object according to claim 6, wherein the virtual model further includes a second skeleton corresponding to the second virtual part;
the controlling the movement of the second virtual part of the virtual object according to the second association relation between the first virtual part and the second virtual part of the virtual object comprises the following steps:
determining bone data of a second bone according to the bone data of the first bone and the association relation between the first bone corresponding to the first virtual part and the second bone corresponding to the second virtual part;
And controlling the movement of a second virtual part of the virtual object according to the bone data of the second bone.
8. The method of claim 6 or 7, wherein the skeletal data comprises at least one of: rotation data, scaling data, movement data.
9. The method for controlling a virtual object according to claim 1, wherein controlling the movement of the second virtual part of the virtual object according to the second association relationship between the first virtual part and the second virtual part of the virtual object comprises:
detecting the number of movements of the first virtual part;
and when the first virtual part moves for a preset number of times, controlling the second virtual part of the virtual object to move according to a second association relation between the first virtual part and the second virtual part of the virtual object.
10. The method of controlling a virtual object according to claim 6, wherein the virtual model includes at least one of: three-dimensional model, two-dimensional model.
11. The method for controlling a virtual object according to claim 1, wherein the preset portion is a head, the first virtual portion is a head, and the second virtual portion includes at least one of: trunk, limbs, ears, tails, wings.
12. The method of claim 1, wherein the predetermined location and the first virtual location are one of five sense organs, and the second virtual location includes at least one of: one of the five sense organs, torso, limbs, tail, wings different from the first virtual location.
13. A control apparatus for a virtual object, the apparatus comprising:
the acquisition module is used for acquiring action data of a preset part of a user;
the first control module is used for controlling the movement of a first virtual part of the virtual object according to the action data and the first association relation between the preset part of the user and the first virtual part of the virtual object;
the second control module is used for controlling the movement of a second virtual part of the virtual object according to a second association relation between the first virtual part and the second virtual part of the virtual object; the second association relationship is a corresponding relationship between a first virtual part and a second virtual part of the virtual object, and a binding relationship between a motion state of the first virtual part and a motion state of the corresponding second virtual part; the first virtual location and the second virtual location are different virtual locations.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, which is adapted to be loaded by a processor for performing the steps in the method of controlling a virtual object according to any of claims 1-12.
15. An electronic device comprising a memory in which a computer program is stored and a processor that performs the steps in the method of controlling a virtual object according to any one of claims 1-12 by calling the computer program stored in the memory.
CN202110839197.8A 2021-07-23 2021-07-23 Virtual object control method and device, storage medium and electronic equipment Active CN113546420B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110839197.8A CN113546420B (en) 2021-07-23 2021-07-23 Virtual object control method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110839197.8A CN113546420B (en) 2021-07-23 2021-07-23 Virtual object control method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113546420A CN113546420A (en) 2021-10-26
CN113546420B true CN113546420B (en) 2024-04-09

Family

ID=78132702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110839197.8A Active CN113546420B (en) 2021-07-23 2021-07-23 Virtual object control method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113546420B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114699770A (en) * 2022-04-19 2022-07-05 北京字跳网络技术有限公司 Method and device for controlling motion of virtual object

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109767482A (en) * 2019-01-09 2019-05-17 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN109816773A (en) * 2018-12-29 2019-05-28 深圳市瑞立视多媒体科技有限公司 A kind of driving method, plug-in unit and the terminal device of the skeleton model of virtual portrait
CN111598987A (en) * 2020-05-18 2020-08-28 网易(杭州)网络有限公司 Bone processing method, device, equipment and storage medium of virtual object
CN112233211A (en) * 2020-11-03 2021-01-15 网易(杭州)网络有限公司 Animation production method and device, storage medium and computer equipment
CN112241203A (en) * 2020-10-21 2021-01-19 广州博冠信息科技有限公司 Control device and method for three-dimensional virtual character, storage medium and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111247B (en) * 2019-05-15 2022-06-24 浙江商汤科技开发有限公司 Face deformation processing method, device and equipment

Also Published As

Publication number Publication date
CN113546420A (en) 2021-10-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant