CN110941337A - Control method of avatar, terminal device and computer readable storage medium - Google Patents

Control method of avatar, terminal device and computer readable storage medium

Info

Publication number
CN110941337A
Authority
CN
China
Prior art keywords
virtual object
position information
control method
virtual
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911169380.0A
Other languages
Chinese (zh)
Inventor
史亚男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Microphone Holdings Co Ltd
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Microphone Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Microphone Holdings Co Ltd
Priority to CN201911169380.0A
Publication of CN110941337A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a control method of an avatar, a terminal device, and a computer-readable storage medium. The control method comprises the following steps: acquiring first position information of a user's gesture in a virtual scene; acquiring second position information of a virtual object in the virtual scene; and controlling the virtual object according to the first position information and the second position information. By combining the position information of the user's gesture in the virtual scene with that of the virtual object and tracking changes in the position coordinates, the virtual object can be controlled in three-dimensional space, making AR interaction more natural and enabling a more diverse range of operations.

Description

Control method of avatar, terminal device and computer readable storage medium
Technical Field
The present invention relates to the field of virtual reality, and in particular, to a method for controlling an avatar, a terminal device, and a computer-readable storage medium.
Background
With the support of computer vision technology, the depth recognition capability of machine vision is now applied flexibly: a mobile phone camera can automatically recognize an object and present it with an augmented reality (AR) effect, and the phone's existing processor and camera can recognize different gestures to trigger corresponding click actions. This has become the main mode of current mobile phone AR applications. Controlling operations such as moving, enlarging, and reducing a virtual object through on-screen taps and drags or a virtual on-screen joystick is currently the common way of interacting with AR virtual objects. However, in this way the user cannot manipulate the position coordinates of the real scene; the user's operation of the virtual object is confined to the screen area and cannot interact naturally with the object in three-dimensional space.
Disclosure of Invention
The main aim of the invention is to provide a control method of an avatar, so as to solve the problem that, in the existing AR interaction technology, a user cannot interact naturally with an object in three-dimensional space.
In order to achieve the above object, the present invention provides a method for controlling an avatar, the method comprising the steps of:
acquiring first position information of a gesture of a user in a virtual scene;
acquiring second position information of a virtual object in the virtual scene;
and controlling the virtual object according to the first position information and the second position information.
Optionally, the step of controlling the virtual object according to the first position information and the second position information includes:
acquiring a motion track of the gesture according to the first position information of the gesture;
acquiring track parameters of the motion track;
and controlling the virtual object according to the track parameter and the second position information.
Optionally, the step of controlling the virtual object according to the trajectory parameter and the second position information includes:
determining corresponding operation according to the track parameter and the second position information;
controlling the virtual object according to the operation, the operation including at least one of moving, zooming, and rotating.
Optionally, the step of controlling the virtual object according to the first position information and the second position information further includes:
acquiring a touch position parameter of the gesture according to the first position information of the gesture;
and controlling the virtual object according to the touch position parameter and the second position information.
Optionally, before the step of obtaining the first position information of the gesture of the user in the virtual scene, the method further includes:
acquiring a real-time scene image shot by a camera;
taking the real-time scene image as a background of the virtual scene;
placing the virtual object in the virtual scene.
Optionally, the method for controlling the avatar further includes:
acquiring a position change parameter of the terminal device, wherein the position change parameter comprises at least one of a displacement parameter and a rotation parameter;
and controlling the virtual object to rotate or move according to the position change parameter.
Optionally, the step of controlling the virtual object to rotate or move according to the position change parameter includes:
and adjusting the position and the posture of the virtual object according to the position change parameter and the second position information of the virtual object.
To achieve the above object, the present invention also proposes a terminal device, comprising a memory, a processor, and an avatar control program stored on the memory and executable on the processor; when executed by the processor, the control program implements the steps of the avatar control method described above.
To achieve the above object, the present invention also proposes a computer-readable storage medium having an avatar control program stored thereon; when executed by a processor, the control program implements the steps of the avatar control method described above.
According to the technical scheme of the invention, first position information of a user's gesture in a virtual scene is acquired, second position information of a virtual object in the virtual scene is acquired, and the virtual object is controlled according to the first position information and the second position information. By combining the user's position information in the virtual scene with that of the virtual object and tracking changes in the position coordinates, the virtual object can be controlled in three-dimensional space, making AR interaction more natural and enabling a more diverse range of operations.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a control method of an avatar according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a control method of an avatar according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating a control method for an avatar according to a third embodiment of the present invention;
FIG. 5 is a flowchart illustrating a control method for an avatar according to a fourth embodiment of the present invention;
FIG. 6 is a flowchart illustrating a fifth embodiment of the avatar control method according to the present invention;
FIG. 7 is a flowchart illustrating a control method for an avatar according to a sixth embodiment of the present invention;
fig. 8 is a flowchart illustrating a control method of an avatar according to a seventh embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
It should be noted that any directional indications (such as up, down, left, right, front, and back) in the embodiments of the present invention are only used to explain the relative positional relationships, motion, and the like of the components in a particular posture; if that posture changes, the directional indications change accordingly.
In addition, the technical solutions of the various embodiments may be combined with one another, provided that the combination can be realized by a person skilled in the art; where combined technical solutions are contradictory or cannot be realized, the combination should be deemed not to exist and falls outside the protection scope of the present invention.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: acquiring first position information of a gesture of a user in a virtual scene; acquiring second position information of a virtual object in the virtual scene; and controlling the virtual object according to the first position information and the second position information.
In the prior art, the user's operation of the virtual object is limited to the screen area, so the user cannot interact naturally with an object in three-dimensional space.
The invention provides a control method of an avatar: acquiring first position information of a user's gesture in a virtual scene; acquiring second position information of a virtual object in the virtual scene; and controlling the virtual object according to the first position information and the second position information. This solves the problem that, in the existing AR interaction technology, a user cannot interact naturally with an object in three-dimensional space.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal in the embodiments of the invention may be a smart phone, or another mobile intelligent terminal such as a tablet computer.
As shown in fig. 1, the terminal may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to realize connective communication between these components. The user interface 1003 may include a display screen (Display) and a camera (Camera), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WiFi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (NVM) such as a disk memory; it may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as a kind of computer-readable storage medium, may include an operating system, a network communication module, and a program of the avatar control method.
In the terminal shown in fig. 1, the network interface 1004 is mainly used to connect to a backend server and exchange data with it; the user interface 1003 is mainly used to connect to a client (user side), acquire the position information of each object in the virtual scene, and exchange data with the client; and the processor 1001 may be used to call the avatar control program stored in the memory 1005 and perform the following operations:
acquiring first position information of a gesture of a user in a virtual scene;
acquiring second position information of a virtual object in the virtual scene;
and controlling the virtual object according to the first position information and the second position information.
Further, the processor 1001 may call the avatar control program stored in the memory 1005 to also perform the following operations:
acquiring a motion track of the gesture according to the first position information of the gesture;
acquiring track parameters of the motion track;
and controlling the virtual object according to the track parameter and the second position information.
Further, the processor 1001 may call the avatar control program stored in the memory 1005 to also perform the following operations:
determining corresponding operation according to the track parameter and the second position information;
and controlling the virtual object according to the operation.
Further, the processor 1001 may call the avatar control program stored in the memory 1005 to also perform the following operations:
acquiring a touch position parameter of the gesture according to the first position information of the gesture;
and controlling the virtual object according to the touch position parameter and the second position information.
Further, the processor 1001 may call the avatar control program stored in the memory 1005 to also perform the following operations:
acquiring a real-time scene image shot by a camera;
taking the real-time scene image as a background of the virtual scene;
placing the virtual object in the virtual scene.
Further, the processor 1001 may call the avatar control program stored in the memory 1005 to also perform the following operations:
acquiring a position change parameter of the terminal device, wherein the position change parameter comprises at least one of a displacement parameter and a rotation parameter;
and controlling the virtual object to rotate or move according to the position change parameter.
Further, the processor 1001 may call the avatar control program stored in the memory 1005 to also perform the following operations:
and adjusting the position and the posture of the virtual object according to the position change parameter and the second position information of the virtual object.
Based on the above hardware architecture, embodiments of the avatar control method of the present invention are provided.
Referring to fig. 2, fig. 2 is a first embodiment of a control method of an avatar of the present invention, the control method of the avatar including the steps of:
step S10, acquiring first position information of a gesture of a user in a virtual scene;
In this embodiment, the camera of a smartphone or other mobile terminal device captures a video of the user's gesture motion in the virtual scene, and the OpenCV library (open source computer vision library) is used to recognize and analyze the captured gesture video. Combined with the virtual coordinate system of the virtual scene, this yields the first position information of the user's gesture. The first position information includes both dynamic and static position information of the gesture: if the gesture changes within the virtual scene, its motion trajectory forms a corresponding coordinate trajectory, which is the dynamic position information; if the gesture does not change dynamically in the virtual scene, the system acquires its static position information.
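As an illustration of this step, the following is a minimal Python sketch. The patent names OpenCV but does not specify a detection algorithm, so the skin-colour segmentation below is an assumption, and the helper to_virtual_coords (mapping pixel coordinates into the virtual coordinate system) is a hypothetical placeholder.

```python
import cv2
import numpy as np

def gesture_position(frame, to_virtual_coords):
    """Extract one frame's gesture position (first position information)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Rough skin-colour range in HSV; would need tuning per camera and lighting.
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                # no hand visible this frame
    hand = max(contours, key=cv2.contourArea)      # assume largest blob is the hand
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    return to_virtual_coords(cx, cy)               # pixel -> virtual-scene coords
```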
In this embodiment, each time the user starts the AR software, the system automatically establishes a virtual coordinate system from the currently acquired scene information; once established, the virtual coordinate system does not change.
In this embodiment, each position of the user's gesture in the virtual scene may cause the system to acquire several coordinate points, depending on how many contact points of the gesture operation the system can sense in the virtual scene. When the distance between the user's hand and the virtual object is within the recognizable range, the system takes as the first position information the positions of the fingers and/or other parts of the hand that enter that range; the system can therefore recognize not only finger-based operations but also control operations on the virtual object by the palm, knuckles, and other parts of the hand.
In this embodiment, when the user's gesture approaches the virtual object in the virtual scene for the first time, the system judges whether the gesture can control the virtual object using a preset relative distance. Specifically, with a preset relative distance of 5 mm, the gesture enters the recognizable range when the distance between the virtual object and the gesture point closest to it is less than or equal to 5 mm, and the system then automatically acquires the first position information of the gesture within that range. The preset relative distance is not limited to 5 mm and may take other values.
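A sketch of this recognizable-range test, under the assumption that both the gesture's recognizable points and the object's feature points are available as 3-D coordinates in the virtual coordinate system (in metres, so 5 mm = 0.005):

```python
import numpy as np

def within_recognizable_range(gesture_pts, object_pts, threshold=0.005):
    """True if the gesture point closest to the object is within threshold."""
    g = np.asarray(gesture_pts, dtype=float)[:, None, :]   # (G, 1, 3)
    o = np.asarray(object_pts, dtype=float)[None, :, :]    # (1, O, 3)
    min_dist = np.linalg.norm(g - o, axis=-1).min()        # closest pair distance
    return min_dist <= threshold
```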
Step S20, second position information of the virtual object in the virtual scene is obtained;
In this embodiment, the system automatically obtains the position information of the virtual object in the virtual scene as the second position information. The virtual object in the virtual scene is drawn using the OpenGL library (Open Graphics Library), and the drawn object carries corresponding position information.
In this embodiment, like the first position information of the user's gesture in the virtual scene, the position information of the virtual object is expressed as coordinate information: each virtual object has several coordinate positions that characterize its specific shape in the virtual scene, and these serve as the second position information.
Step S30, controlling the virtual object according to the first position information and the second position information.
In this embodiment, the system fuses the acquired first position information, which represents the user's gesture in the virtual scene, with the second position information of the virtual object, and derives the corresponding control operation to apply to the virtual object.
In this embodiment, the system may determine, from the coordinate position information of the virtual object and of the gesture, whether the gesture has entered the recognizable range within which the virtual object can be controlled. It then identifies the control operation corresponding to the user's current gesture from the gesture's coordinate position information and adjusts the coordinates of the virtual object accordingly, thereby realizing the user's control of the virtual object. The coordinate position information of the gesture in the virtual scene comprises the initial coordinate of each recognizable point and the vector change parameter during the coordinate change; the coordinate position information of the virtual object comprises the initial coordinate of each of its feature points and the vector change parameter during its coordinate change. For ease of distinction, the gesture's vector change parameter in the virtual scene is defined as the first vector change parameter and the virtual object's as the second vector change parameter; the first vector change parameter equals the second, and the coordinates of the virtual object's feature points are adjusted according to the first vector change parameter of the corresponding recognizable point of the gesture.
In this embodiment, each recognizable point of the user's gesture in the virtual scene corresponds to one feature point of the virtual object. Each virtual object has a corresponding number of feature points, which determine the object's coordinate position information and allow the system to control the object by moving their positions. The number of feature points of the virtual object is larger than the number of recognizable points of the gesture, so that every recognizable point can be matched to a feature point. Because the two numbers are unequal, when the coordinates of some of the feature points change, the system automatically adjusts the shape of the virtual object so that its overall form keeps its original proportions without deformation.
In this embodiment, when the distance between the coordinates corresponding to the gesture's first position information and those corresponding to the object's second position information is less than or equal to the preset relative distance, each feature point of the virtual object moves according to the first vector change parameter of its corresponding recognizable point, and the relative positions given by the first and second position information remain unchanged throughout the motion.
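The coordinate update itself reduces to applying the gesture's per-point displacement to the paired feature points, which is what keeps the relative positions fixed. A sketch, assuming points are stored as (N, 3) arrays (the patent fixes no data layout):

```python
import numpy as np

def update_feature_points(feature_pts, gesture_prev, gesture_now):
    """Move each paired feature point by its recognizable point's displacement."""
    # First vector change parameter of the gesture's recognizable points:
    first_vector_change = np.asarray(gesture_now) - np.asarray(gesture_prev)
    # The second vector change parameter equals the first, so apply it directly:
    return np.asarray(feature_pts) + first_vector_change
```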
In this embodiment, the position information of the user in the virtual scene is combined with that of the virtual object, and changes in the position coordinate information are used to control the virtual object in three-dimensional space, making AR interaction more natural and enabling a more diverse range of operations.
Referring to fig. 3, fig. 3 is a second embodiment of the avatar control method of the present invention, based on the first embodiment, the step S30 includes:
step S31, acquiring the motion trail of the gesture according to the first position information of the gesture;
In this embodiment, when the user's gesture moves within the virtual scene, the system acquires the gesture's motion trajectory and analyzes it to obtain the corresponding coordinate trajectory parameters.
Step S32, acquiring the track parameters of the motion track;
In this embodiment, the trajectory parameters of the motion trajectory are the motion parameters of all coordinate points once the user's gesture has entered the recognizable range. The system acquires the coordinate information of all recognizable points once per preset time interval and from it derives the first vector change parameters of all coordinate points; these parameters reflect the precise change process of the user's gesture in the virtual scene.
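Sampled this way, the first vector change parameters are simply per-interval coordinate deltas. A sketch, assuming one (N, 3) array of recognizable-point coordinates per preset interval:

```python
import numpy as np

def first_vector_changes(samples):
    """samples: list of (N, 3) coordinate arrays, one per preset time interval."""
    pts = np.stack(samples)     # (T, N, 3): T samples of N recognizable points
    return pts[1:] - pts[:-1]   # (T-1, N, 3): per-interval displacement vectors
```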
Step S33, controlling the virtual object according to the trajectory parameter and the second position information.
In this embodiment, the system acquires the coordinate information of all recognizable points once per preset time interval, including both the gesture's coordinates and the virtual object's coordinates. During this process, the coordinates of the virtual object are adjusted according to the gesture's coordinates, so that the object moves along the gesture's coordinate trajectory.
In this embodiment, the smaller the preset time interval, the more coordinate information the system obtains and the more often the virtual object's coordinates are updated; the AR interaction therefore responds more sensitively and the user's operation feels more natural.
Referring to fig. 4, fig. 4 is a third embodiment of the avatar control method of the present invention, based on the first embodiment, the step S33 includes:
step S33a, determining corresponding operation according to the track parameter and the second position information;
step S33b, controlling the virtual object according to the operation, wherein the operation comprises at least one of moving, zooming and rotating.
In this embodiment, different motion trajectories of the user's gesture correspond to different control operations, and each trajectory corresponds to different trajectory parameters in the virtual coordinate system. When the user wants to move a virtual object, the hand contacts the object and then drags it in some direction. Mapping this motion into the virtual coordinate system, the system obtains the initial coordinate information of the gesture and of the object, with each initial gesture coordinate paired to a corresponding initial object coordinate. Within a preset time interval, the gesture coordinates move a certain distance along the first vector change parameter to the next coordinate position, yielding the coordinate information at that moment; the system then moves the virtual object the same distance along the same vector direction to obtain the object's coordinates at that moment. The user's overall gesture motion over a period of time is thus decomposed by the system into gesture motions over many time intervals, and the object is processed once per interval, so that the virtual object completes its displacement along the user's motion trajectory.
In this embodiment, when the user wants to zoom a virtual object, the hand contacts the object and the fingers converge toward, or diverge from, a common center. Mapping this motion into the virtual coordinate system, the system obtains the initial coordinates of the gesture and of the object, paired as before. Within each preset time interval, the gesture coordinates move a certain distance toward the center or in the opposite direction to the next coordinate position, yielding the coordinate information at that moment; the system applies the same processing to the corresponding feature points of the object according to the gesture's first vector change parameter, obtaining the object's coordinates at that moment. As with moving, the overall gesture motion is decomposed into motions over many time intervals, with one update per interval, so that the object is zoomed along the gesture's motion trajectory. When the user wants to rotate a virtual object, the system processes it in the same way as moving and zooming; the difference lies only in the gesture's motion pattern, which produces a different coordinate-change trajectory and hence a different control effect.
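The operation of step S33a can thus be recovered from the shape of the per-point displacements. The patent does not give a concrete decision rule, so the thresholds and the convergence test in the following sketch are illustrative assumptions only:

```python
import numpy as np

def classify_operation(prev_pts, now_pts, eps=1e-3):
    """Classify one interval's gesture motion as 'move', 'zoom' or 'rotate'."""
    prev, now = np.asarray(prev_pts), np.asarray(now_pts)
    translation = (now - prev).mean(axis=0)
    residual = (now - prev) - translation       # motion beyond pure translation
    if np.linalg.norm(residual, axis=1).max() < eps:
        return "move"                           # all points translate together
    # Mean distance of the points from their centroid, before and after:
    r_prev = np.linalg.norm(prev - prev.mean(axis=0), axis=1).mean()
    r_now = np.linalg.norm(now - now.mean(axis=0), axis=1).mean()
    if abs(r_now - r_prev) > eps:
        return "zoom"    # fingers converge toward / diverge from a common centre
    return "rotate"      # remaining rigid motion about the centroid
```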
In this embodiment, the coordinates of the user's gesture correspond one-to-one to coordinates of the virtual object: movement of the gesture changes the gesture's coordinates, and the object's coordinates change according to the same rule, thereby realizing the move, zoom, and rotate operations on the virtual object.
Referring to fig. 5, fig. 5 is a fourth embodiment of the avatar control method of the present invention, based on the first embodiment, the step S30 further includes:
step S34, acquiring a touch position parameter of the gesture according to the first position information of the gesture;
In this embodiment, when the user's gesture only touches the virtual object in the virtual scene without dragging it, the system obtains the touch position at which the gesture acts on the virtual object; this touch position corresponds to one coordinate in the coordinate system of the virtual scene.
Step S35, controlling the virtual object according to the touch position parameter and the second position information.
In this embodiment, after obtaining the touch position of the user's gesture on the virtual object, the system fuses the corresponding coordinate in the virtual-scene coordinate system with the coordinates of the virtual object, and determines the object's initial coordinate and the collision direction at the moment of collision. The collision direction is determined from the first vector change parameter between the initial and final coordinate positions of the gesture within the recognizable range. It should be noted that, while touching the virtual object, the user's gesture still produces a tiny motion trajectory; because this differs greatly in motion time and distance from the trajectories used for zooming, moving, and rotating, the system can automatically recognize it as a touch operation. In this embodiment, the motion of touching the virtual object is defined as a static motion, for which the system acquires static position information. For example, when the user's index finger taps the virtual object, the object produces a collision effect whose pop-out direction is the same as the direction in which the index finger was moving at the instant of contact with the object.
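The collision direction therefore reduces to the unit vector of the fingertip's tiny motion across the recognizable range. A sketch (the impulse magnitude applied to the object is omitted, since the patent does not specify it):

```python
import numpy as np

def collision_direction(touch_start, touch_end):
    """Pop-out direction of the object after a static touch (step S35)."""
    v = np.asarray(touch_end, dtype=float) - np.asarray(touch_start, dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else np.zeros(3)   # zero vector if the finger was still
```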
In this embodiment, the user produces a collision effect by touching the virtual object, which enlarges the set of AR interaction operations and further improves the interaction between the user and the virtual object.
Referring to fig. 6, fig. 6 is a fifth embodiment of the avatar control method according to the present invention, and based on any of the first to fourth embodiments, the step S10 is preceded by:
step S40, acquiring a real-time scene image shot by a camera;
In this embodiment, while the user is using the AR software, the camera of the mobile phone or other mobile terminal device is kept on, so that the current scene image can be captured in real time.
step S50, using the real-time scene image as the background of the virtual scene;
step S60, placing the virtual object in the virtual scene.
In this embodiment, after the background of the virtual scene is obtained, the user may use the system's automatic AR-image synthesis function, in which the system invokes the OpenGL library to draw a virtual object matched to the scene information of the virtual background; alternatively, the user may select a corresponding AR image from the AR resource library as the virtual object and place it in the virtual scene.
In this embodiment, the system uses the scene image captured by the camera in real time as the background of the virtual scene and fuses the virtual object with the real scene; the virtual background changes as the scene changes, giving a more real and natural interactive experience.
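A minimal sketch of steps S40 to S60, assuming OpenCV for the camera capture; the OpenGL drawing step is abbreviated to the hypothetical placeholder draw_virtual_object, since the patent names the library but not a concrete rendering pipeline:

```python
import cv2

cap = cv2.VideoCapture(0)                  # the terminal device's camera
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        background = frame                 # real-time scene image as background
        # draw_virtual_object(background)  # hypothetical OpenGL rendering step
        cv2.imshow("virtual scene", background)
        if cv2.waitKey(1) & 0xFF == 27:    # Esc to quit
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```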
Referring to fig. 7, fig. 7 is a sixth embodiment of the avatar control method according to the present invention, which further includes, based on any one of the first to fifth embodiments:
step S70, acquiring a position change parameter of the terminal device, wherein the position change parameter comprises at least one of a displacement parameter and a rotation parameter;
In this embodiment, when the position of the terminal device is not fixed, for instance when the user holds the mobile terminal and moves, the scene image captured by the terminal camera changes, the background of the virtual scene changes, and the position of the virtual object in the virtual scene changes as well. Like the virtual object, the terminal device has several recognizable feature points in the virtual coordinate system; for ease of distinction, the feature points of the terminal device in the virtual coordinate system are named operation points. The number of operation points of the terminal device equals the number of feature points of the virtual object and changes along with it, with each operation point corresponding to one feature point. The system acquires the position change parameter of the terminal device in the virtual coordinate system, which comprises at least one of a displacement parameter and a rotation parameter, and from it determines the device's coordinate position information, namely the initial coordinate of each operation point and the vector change parameter during the coordinate change (the third vector change parameter).
And step S80, controlling the virtual object to move or rotate according to the position change parameter.
In this embodiment, after the virtual object is placed in the virtual scene, the system automatically establishes a virtual coordinate system in which both the virtual object and the terminal device have determined coordinate positions; their relative coordinate position is the distance between them. Therefore, when the position parameter of the terminal device changes, the system acquires the device's third vector change parameter, adjusts the initial coordinates of the virtual object's feature points in the virtual coordinate system according to it, and updates the object's coordinates following the same rule as the device's position change, so that the relative coordinate position of the object and the device always remains unchanged. When the user moves or rotates the handheld terminal device, the virtual object presents the corresponding moving or rotating effect.
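Since each operation point corresponds to one feature point, keeping the relative coordinate position fixed amounts to applying the device's per-point displacement (the third vector change parameter) to the object's feature points. A sketch under the same array-layout assumption as before:

```python
import numpy as np

def follow_device(feature_pts, op_pts_prev, op_pts_now):
    """Shift feature points so the object-device relative position is unchanged."""
    third_vector_change = np.asarray(op_pts_now) - np.asarray(op_pts_prev)
    return np.asarray(feature_pts) + third_vector_change
```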
In this embodiment, the system adjusts the pose of the virtual object in real time according to changes in the pose of the device, so that the pose adjustment of the virtual object integrates both the gesture and the device pose, strengthening the advantages of the AR product.
Referring to fig. 8, fig. 8 is a seventh embodiment of the avatar control method of the present invention, and based on the above sixth embodiment, step S80 includes:
and step S81, adjusting the position and the posture of the virtual object according to the position change parameter and the second position information of the virtual object.
In this embodiment, the system always keeps the relative coordinate positions of the virtual object and the terminal device unchanged, so when the coordinate of the terminal device in the virtual coordinate system changes, the coordinate of the virtual object in the virtual coordinate system changes in the same way.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium as described above (e.g., ROM/RAM, magnetic disk, optical disk), including instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A control method of an avatar, the control method of the avatar comprising the steps of:
acquiring first position information of a gesture of a user in a virtual scene;
acquiring second position information of a virtual object in the virtual scene;
and controlling the virtual object according to the first position information and the second position information.
2. The avatar control method of claim 1, wherein said step of controlling said virtual object according to said first position information and said second position information comprises:
acquiring a motion track of the gesture according to the first position information of the gesture;
acquiring track parameters of the motion track;
and controlling the virtual object according to the track parameter and the second position information.
3. The avatar control method of claim 2, wherein said step of controlling said virtual object according to said trajectory parameter and said second position information comprises:
determining corresponding operation according to the track parameter and the second position information;
controlling the virtual object according to the operation, the operation including at least one of moving, zooming, and rotating.
4. The avatar control method of claim 1, wherein said step of controlling said virtual object according to said first position information and said second position information further comprises:
acquiring a touch position parameter of the gesture according to the first position information of the gesture;
and controlling the virtual object according to the touch position parameter and the second position information.
5. The avatar control method of any one of claims 1 to 4, further comprising, before said step of acquiring first location information of a user's gesture in a virtual scene:
acquiring a real-time scene image shot by a camera;
and taking the real-time scene image as the background of the virtual scene.
6. The avatar control method of claim 5, further comprising, after said step of using said real-time scene image as a background of said virtual scene:
placing the virtual object in the virtual scene.
7. The control method of an avatar according to claim 1, further comprising:
acquiring a position change parameter of the terminal device, wherein the position change parameter comprises at least one of a displacement parameter and a rotation parameter;
and controlling the virtual object to rotate or move according to the position change parameter.
8. The avatar control method of claim 7, wherein said step of controlling said virtual object to rotate or move according to said position change parameter comprises:
and adjusting the position and the posture of the virtual object according to the position change parameter and the second position information of the virtual object.
9. A terminal device characterized in that the terminal device includes a memory, a processor, and a control program of an avatar control method stored on the memory and executable on the processor, the control program of the avatar control method realizing the steps of the avatar control method according to any one of claims 1 to 8 when executed by the processor.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a control program of an avatar control method, the control program of the avatar control method implementing the steps of the avatar control method according to any one of claims 1 to 8 when executed by a processor.
CN201911169380.0A 2019-11-25 2019-11-25 Control method of avatar, terminal device and computer readable storage medium Pending CN110941337A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911169380.0A CN110941337A (en) 2019-11-25 2019-11-25 Control method of avatar, terminal device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911169380.0A CN110941337A (en) 2019-11-25 2019-11-25 Control method of avatar, terminal device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110941337A 2020-03-31

Family

ID=69908474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911169380.0A Pending CN110941337A (en) 2019-11-25 2019-11-25 Control method of avatar, terminal device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110941337A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111880652A (en) * 2020-07-16 2020-11-03 北京悉见科技有限公司 Method, apparatus and storage medium for moving position of AR object
WO2022021965A1 (en) * 2020-07-30 2022-02-03 北京市商汤科技开发有限公司 Virtual object adjustment method and apparatus, and electronic device, computer storage medium and program
WO2022021980A1 (en) * 2020-07-30 2022-02-03 北京市商汤科技开发有限公司 Virtual object control method and apparatus, and electronic device and storage medium
CN112053449A (en) * 2020-09-09 2020-12-08 脸萌有限公司 Augmented reality-based display method, device and storage medium
US11594000B2 (en) 2020-09-09 2023-02-28 Beijing Zitiao Network Technology Co., Ltd. Augmented reality-based display method and device, and storage medium
US11989845B2 (en) 2020-09-09 2024-05-21 Beijing Zitiao Network Technology Co., Ltd. Implementation and display of augmented reality
CN112419509A (en) * 2020-11-27 2021-02-26 上海影创信息科技有限公司 Virtual object generation processing method and system and VR glasses thereof
CN114764327A (en) * 2022-05-09 2022-07-19 北京未来时空科技有限公司 Method and device for manufacturing three-dimensional interactive media and storage medium

Similar Documents

Publication Publication Date Title
CN110941337A (en) Control method of avatar, terminal device and computer readable storage medium
CN109062479B (en) Split screen application switching method and device, storage medium and electronic equipment
KR101395426B1 (en) Method, apparatus and computer program product for creating graphical objects with desired physical features for usage in animations
CA2822812C (en) Systems and methods for adaptive gesture recognition
US20200218356A1 (en) Systems and methods for providing dynamic haptic playback for an augmented or virtual reality environments
CN105148517A (en) Information processing method, terminal and computer storage medium
CN110727496B (en) Layout method and device of graphical user interface, electronic equipment and storage medium
CN111736691A (en) Interactive method and device of head-mounted display equipment, terminal equipment and storage medium
CN111701226A (en) Control method, device and equipment for control in graphical user interface and storage medium
CN111383345B (en) Virtual content display method and device, terminal equipment and storage medium
CN110866940B (en) Virtual picture control method and device, terminal equipment and storage medium
CN112083989A (en) Interface adjusting method and device
CN107291237B (en) Information processing method and head-mounted electronic equipment
CN111913674B (en) Virtual content display method, device, system, terminal equipment and storage medium
CN113457144B (en) Virtual unit selection method and device in game, storage medium and electronic equipment
CN113244611B (en) Virtual article processing method, device, equipment and storage medium
EP2725469A2 (en) Information-processing device, program, information-processing method, and information-processing system
CN114995713B (en) Display control method, display control device, electronic equipment and readable storage medium
CN110494915B (en) Electronic device, control method thereof, and computer-readable medium
CN106843676B (en) Touch control method and touch control device for touch terminal
CN113440835B (en) Virtual unit control method and device, processor and electronic device
CN112328164B (en) Control method and electronic equipment
CN113485590A (en) Touch operation method and device
CN107977071B (en) Operation method and device suitable for space system
WO2021172092A1 (en) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination