CN111161335A - Virtual image mapping method, virtual image mapping device and computer readable storage medium - Google Patents


Info

Publication number
CN111161335A
CN111161335A (application CN201911424401.9A)
Authority
CN
China
Prior art keywords: joint, dimensional, dimensional coordinates, virtual, preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911424401.9A
Other languages
Chinese (zh)
Inventor
杨光
王琛
赵晓东
Current Assignee
Shenzhen TCL Digital Technology Co Ltd
Original Assignee
Shenzhen TCL Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TCL Digital Technology Co Ltd filed Critical Shenzhen TCL Digital Technology Co Ltd
Priority to CN201911424401.9A priority Critical patent/CN111161335A/en
Publication of CN111161335A publication Critical patent/CN111161335A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual image mapping method comprising the following steps: acquiring the actual two-dimensional coordinates of each joint in a two-dimensional human body image, and determining the reference length of the limb between adjacent joints from those coordinates; determining the depth value of the joint corresponding to the limb from the reference length and a preset actual length; determining the three-dimensional coordinates of the joint in a virtual space from the depth value and the two-dimensional coordinates; and displaying a virtual image corresponding to the human body image in a preset virtual space according to the three-dimensional coordinates. The invention also discloses a virtual image mapping device and a computer-readable storage medium. By acquiring the two-dimensional coordinates of the human joints, determining their three-dimensional coordinates in combination with the reference lengths between adjacent joints, and mapping those coordinates onto a virtual image, the invention realizes the conversion from two-dimensional to three-dimensional coordinates, frees motion mapping from the constraints of motion-capture suits and depth cameras, and makes motion mapping more convenient.

Description

Virtual image mapping method, virtual image mapping device and computer readable storage medium
Technical Field
The present invention relates to the field of virtual image technology, and in particular, to a method and an apparatus for mapping a virtual image and a computer-readable storage medium.
Background
The technique of mapping human body movements onto a three-dimensional avatar has applications in many fields. In film and television production, actors wear motion-capture suits, and their movements are mapped onto virtual film characters after technical processing. In the entertainment field, motion-capture technology based on depth cameras can reflect a player's movements on a game character in real time.
At present, mapping human body movements onto a three-dimensional virtual image requires equipment such as motion-capture suits and depth cameras, which imposes many constraints and is very inconvenient.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The main aim of the invention is to provide a virtual image mapping method, a mapping device and a computer-readable storage medium, so as to make motion mapping more convenient through the conversion from two-dimensional coordinates to three-dimensional coordinates.
In order to achieve the above object, the present invention provides an avatar mapping method, comprising the steps of:
acquiring actual two-dimensional coordinates of each joint in a human body two-dimensional image, and determining the reference length of limbs between adjacent joints according to the actual two-dimensional coordinates;
determining the depth value of the joint corresponding to the limb according to the reference length and a preset actual length;
determining the three-dimensional coordinates of the joint in a preset virtual space according to the depth value of the joint and the two-dimensional coordinates;
and displaying a virtual image corresponding to the human body image in the preset virtual space according to the three-dimensional coordinate.
Optionally, the step of acquiring actual two-dimensional coordinates of each joint in the two-dimensional image of the human body includes:
acquiring a reference two-dimensional coordinate of each joint in a current frame human body image shot by a camera and an adjacent two-dimensional coordinate of each joint in an adjacent image frame;
and determining the actual two-dimensional coordinates of the joints according to the reference two-dimensional coordinates and the adjacent two-dimensional coordinates of each joint.
Optionally, the step of determining the actual two-dimensional coordinates of the joints according to the reference two-dimensional coordinates and the adjacent two-dimensional coordinates of each joint comprises:
acquiring a reference two-dimensional coordinate of the joint and a mean two-dimensional coordinate of the adjacent two-dimensional coordinates;
respectively acquiring a reference two-dimensional coordinate of the joint and confidence degrees of the adjacent two-dimensional coordinates;
calculating the confidence coefficient corresponding to the mean two-dimensional coordinate according to the confidence coefficient;
and when the confidence corresponding to the mean two-dimensional coordinate is greater than the preset confidence, taking the mean two-dimensional coordinate as the actual two-dimensional coordinate of the joint.
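The optional smoothing steps above can be sketched in Python. The source does not specify how the mean coordinate's confidence is calculated from the per-frame confidences, so the simple averaging below, like all the names, is an assumption:

```python
def smooth_joint(ref_xy, ref_conf, neighbor_xys, neighbor_confs, min_conf=0.5):
    """Average a joint's reference coordinate with its coordinates in
    adjacent frames, and keep the mean only if its confidence clears a
    preset threshold (min_conf is an illustrative value)."""
    xs = [ref_xy[0]] + [p[0] for p in neighbor_xys]
    ys = [ref_xy[1]] + [p[1] for p in neighbor_xys]
    confs = [ref_conf] + list(neighbor_confs)

    mean_xy = (sum(xs) / len(xs), sum(ys) / len(ys))
    mean_conf = sum(confs) / len(confs)  # assumed: simple average

    if mean_conf > min_conf:
        return mean_xy   # accept the smoothed coordinate
    return ref_xy        # fall back to the current frame's detection
```

With a confident detection the mean is kept; with a low-confidence mean the current frame's reference coordinate is used unchanged.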
Optionally, the step of determining the three-dimensional coordinates of the joint in the preset virtual space according to the depth value of the joint and the two-dimensional coordinates includes:
and determining a three-dimensional coordinate of the joint in a preset virtual space according to the depth value, the preset coordinate direction and the two-dimensional coordinate, wherein the preset coordinate direction is perpendicular to a plane where the two-dimensional coordinate is located.
Optionally, after the step of determining the three-dimensional coordinates of the joint in the preset virtual space according to the depth value of the joint and the two-dimensional coordinates, the method for mapping the avatar further includes:
acquiring an included angle between adjacent limbs, wherein the adjacent limbs are connected with the same joint;
when the included angle is within a preset angle range, executing the step of displaying a virtual image corresponding to the human body image in the preset virtual space according to the three-dimensional coordinates of the joint;
and when the included angle exceeds the preset angle range, re-determining the three-dimensional coordinate of the joint in a preset virtual space according to the depth value, the opposite direction of the preset coordinate direction and the two-dimensional coordinate, wherein a virtual image corresponding to the human body image is displayed in the preset virtual space according to the re-determined three-dimensional coordinate.
Optionally, the step of displaying the virtual image corresponding to the human body image in the preset virtual space according to the three-dimensional coordinates includes:
determining a first virtual joint in the virtual image corresponding to the joint;
acquiring the current three-dimensional coordinates of the first virtual joint;
determining a second virtual joint to be rotated and a rotation angle in the virtual image according to the current three-dimensional coordinate of the first virtual joint and the three-dimensional coordinate of the joint;
and controlling the second virtual joint to rotate according to the rotation angle so as to enable the first virtual joint to move from the position corresponding to the current three-dimensional coordinate to the target position corresponding to the three-dimensional coordinate of the joint.
Optionally, the step of controlling the second virtual joint to rotate according to the rotation angle includes:
acquiring a three-dimensional distance between a position corresponding to the current three-dimensional coordinate and the target position;
determining a rotation angular velocity corresponding to the second virtual joint according to the three-dimensional distance;
and controlling the second virtual joint to rotate the rotation angle according to the rotation angular velocity.
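The rotation-control steps above can be sketched as follows. The source only states that the angular velocity is determined from the three-dimensional distance, so the proportional gain and the speed cap below are illustrative assumptions, as are all names:

```python
import math

def rotation_step(parent_xyz, current_xyz, target_xyz, gain=2.0, max_speed=180.0):
    """One control step for the second virtual joint: the rotation angle is
    the angle between the current and target limb directions about the
    parent joint; the angular speed grows with the remaining 3-D distance."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def norm(v): return math.sqrt(sum(x * x for x in v))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    u = sub(current_xyz, parent_xyz)            # current limb direction
    v = sub(target_xyz, parent_xyz)             # target limb direction
    cos_a = max(-1.0, min(1.0, dot(u, v) / (norm(u) * norm(v))))
    angle = math.degrees(math.acos(cos_a))      # rotation angle, degrees

    distance = norm(sub(target_xyz, current_xyz))   # 3-D distance
    speed = min(gain * distance, max_speed)         # deg/s, capped
    return angle, speed
```

A distance-proportional speed makes large corrections fast while small residual corrections stay smooth, which matches the stated goal of fluent avatar motion.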
Optionally, the step of acquiring actual two-dimensional coordinates of each joint in the two-dimensional image of the human body includes:
and receiving the actual two-dimensional coordinates sent by a preset terminal, wherein the preset terminal acquires the human body two-dimensional image, identifies each joint in the human body two-dimensional image, and determines the actual two-dimensional coordinates of each joint according to a preset coordinate system.
Further, to achieve the above object, the present invention also provides an avatar mapping apparatus, comprising: a memory, a processor and a mapping program of an avatar stored on the memory and executable on the processor, the mapping program of the avatar when executed by the processor implementing the steps of the mapping method of the avatar as described in any one of the above.
Further, to achieve the above object, the present invention also provides a computer readable storage medium having stored thereon a mapping program of an avatar, the mapping program of the avatar, when executed by a processor, implementing the steps of the mapping method of the avatar as described in any one of the above.
The virtual image mapping method, virtual image mapping device and computer-readable storage medium provided by the embodiments of the invention acquire the actual two-dimensional coordinates of each joint in a two-dimensional human body image, determine the reference length of the limb between adjacent joints from those coordinates, determine the depth value of the joint corresponding to the limb from the reference length and a preset actual length, determine the three-dimensional coordinates of the joint in a virtual space from the depth value and the two-dimensional coordinates, and display a virtual image corresponding to the human body image in the preset virtual space according to the three-dimensional coordinates. By acquiring the two-dimensional coordinates of the human joints, determining their three-dimensional coordinates in combination with the reference lengths between adjacent joints, and mapping them onto the virtual image, the invention realizes the conversion from two-dimensional to three-dimensional coordinates, frees motion mapping from the constraints of motion-capture suits and depth cameras, and makes motion mapping more convenient.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of the avatar mapping method of the present invention;
FIG. 3 is a flowchart illustrating a second embodiment of the avatar mapping method of the present invention;
FIG. 4 is a flowchart illustrating a mapping method of an avatar according to a third embodiment of the present invention;
FIG. 5 is a flowchart illustrating a mapping method of an avatar according to a fourth embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating the depth value calculation principle of the avatar mapping method according to the present invention;
FIG. 7 is a diagram illustrating an output data format of a key point of a neural network.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a solution that determines the three-dimensional coordinates of the human joints by acquiring their two-dimensional coordinates and combining them with the reference lengths between adjacent joints, then maps those coordinates onto the virtual image. This realizes the conversion from two-dimensional to three-dimensional coordinates, frees motion mapping from the constraints of motion-capture suits and depth cameras, and makes motion mapping more convenient.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention may be a smartphone, or another terminal device with a display function, such as a PC, a tablet computer or a smart television.
As shown in fig. 1, the terminal may include a processor 1001 (such as a CPU), a communication bus 1002 and a memory 1003, where the communication bus 1002 enables communication between these components. The memory 1003 may be a high-speed RAM or a non-volatile memory (e.g., disk storage), and may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1003, which is a kind of computer-readable storage medium, may include therein a mapping program of an operating system and an avatar.
In the terminal shown in fig. 1, the processor 1001 may be configured to call a mapping program of an avatar stored in the memory 1003, and perform the following operations:
acquiring actual two-dimensional coordinates of each joint in a human body two-dimensional image, and determining the reference length of limbs between adjacent joints according to the actual two-dimensional coordinates;
determining the depth value of the joint corresponding to the limb according to the reference length and a preset actual length;
determining the three-dimensional coordinates of the joint in a preset virtual space according to the depth value of the joint and the two-dimensional coordinates;
and displaying a virtual image corresponding to the human body image in the preset virtual space according to the three-dimensional coordinate.
Further, the processor 1001 may call the mapping program of the avatar stored in the memory 1003, and further perform the following operations:
acquiring a reference two-dimensional coordinate of each joint in a current frame human body image shot by a camera and an adjacent two-dimensional coordinate of each joint in an adjacent image frame;
and determining the actual two-dimensional coordinates of the joints according to the reference two-dimensional coordinates and the adjacent two-dimensional coordinates of each joint.
Further, the processor 1001 may call the mapping program of the avatar stored in the memory 1003, and further perform the following operations:
acquiring a reference two-dimensional coordinate of the joint and a mean two-dimensional coordinate of the adjacent two-dimensional coordinates;
respectively acquiring a reference two-dimensional coordinate of the joint and confidence degrees of the adjacent two-dimensional coordinates;
calculating the confidence coefficient corresponding to the mean two-dimensional coordinate according to the confidence coefficient;
and when the confidence corresponding to the mean two-dimensional coordinate is greater than the preset confidence, taking the mean two-dimensional coordinate as the actual two-dimensional coordinate of the joint.
Further, the processor 1001 may call the mapping program of the avatar stored in the memory 1003, and further perform the following operations:
and determining a three-dimensional coordinate of the joint in a preset virtual space according to the depth value, the preset coordinate direction and the two-dimensional coordinate, wherein the preset coordinate direction is perpendicular to a plane where the two-dimensional coordinate is located.
Further, the processor 1001 may call the mapping program of the avatar stored in the memory 1003, and further perform the following operations:
acquiring an included angle between adjacent limbs, wherein the adjacent limbs are connected with the same joint;
when the included angle is within a preset angle range, executing the step of displaying a virtual image corresponding to the human body image in the preset virtual space according to the three-dimensional coordinates of the joint;
and when the included angle exceeds the preset angle range, re-determining the three-dimensional coordinate of the joint in a preset virtual space according to the depth value, the opposite direction of the preset coordinate direction and the two-dimensional coordinate, wherein a virtual image corresponding to the human body image is displayed in the preset virtual space according to the re-determined three-dimensional coordinate.
Further, the processor 1001 may call the mapping program of the avatar stored in the memory 1003, and further perform the following operations:
determining a first virtual joint in the virtual image corresponding to the joint;
acquiring the current three-dimensional coordinates of the first virtual joint;
determining a second virtual joint to be rotated and a rotation angle in the virtual image according to the current three-dimensional coordinate of the first virtual joint and the three-dimensional coordinate of the joint;
and controlling the second virtual joint to rotate according to the rotation angle so as to enable the first virtual joint to move from the position corresponding to the current three-dimensional coordinate to the target position corresponding to the three-dimensional coordinate of the joint.
Further, the processor 1001 may call the mapping program of the avatar stored in the memory 1003, and further perform the following operations:
acquiring a three-dimensional distance between a position corresponding to the current three-dimensional coordinate and the target position;
determining a rotation angular velocity corresponding to the second virtual joint according to the three-dimensional distance;
and controlling the second virtual joint to rotate the rotation angle according to the rotation angular velocity.
Further, the processor 1001 may call the mapping program of the avatar stored in the memory 1003, and further perform the following operations:
and receiving the actual two-dimensional coordinates sent by a preset terminal, wherein the preset terminal acquires the human body two-dimensional image, identifies each joint in the human body two-dimensional image, and determines the actual two-dimensional coordinates of each joint according to a preset coordinate system.
Referring to fig. 2, in a first embodiment, the avatar mapping method includes the steps of:
step S10, acquiring the actual two-dimensional coordinates of each joint in the human body two-dimensional image, and determining the reference length of the limb between the adjacent joints according to the actual two-dimensional coordinates;
in this embodiment, the actual two-dimensional coordinates correspond to joints in a two-dimensional image of the human body acquired by the non-depth camera, wherein the joints may include a wrist joint, an elbow joint, a knee joint, and the like of the human body. And establishing a two-dimensional coordinate system based on the two-dimensional image of the human body, and determining the actual two-dimensional coordinates of each joint according to the position of each joint in the two-dimensional coordinate system in the two-dimensional image of the human body. By means of the action mapping aiming at the human body two-dimensional image, the calculated amount is far smaller than that of the human body three-dimensional image, the calculation force requirement of the human body action mapping is reduced, the technical scheme of the embodiment can be used for mobile terminal concentration of smart phones and the like, and the condition limitation of the human body action mapping is reduced.
The reference length is the length of the limb between adjacent joints, which can also be regarded as the two-dimensional distance between the two joints. When the reference length is determined from the actual two-dimensional coordinates, it is computed from the coordinates of the two adjacent joints. For example, if the two adjacent joints are the wrist and the elbow, the limb between them is the forearm, and the forearm's length is the reference length. If the wrist's actual two-dimensional coordinates in the image plane are (1,1) and the elbow's are (5,1), the reference length of the forearm is 4, with pixels as the length unit.
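The reference length described above is simply the two-dimensional pixel distance between adjacent joints; a minimal sketch (the function name is an assumption):

```python
import math

def reference_length(joint_a_xy, joint_b_xy):
    """Two-dimensional pixel distance between adjacent joints, i.e. the
    reference length of the limb that connects them."""
    dx = joint_b_xy[0] - joint_a_xy[0]
    dy = joint_b_xy[1] - joint_a_xy[1]
    return math.hypot(dx, dy)
```

For the wrist at (1,1) and elbow at (5,1) this reproduces the forearm reference length of 4 pixels from the example.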
Optionally, this embodiment may involve two terminals. One terminal acquires the two-dimensional human body image, identifies each joint in it, determines the actual two-dimensional coordinates of each joint according to a preset coordinate system, and sends those coordinates to the other terminal. Note that the actual two-dimensional coordinates acquired by the first terminal may be reference two-dimensional coordinates identified directly from the image, or coordinates determined from the reference coordinates and the adjacent-frame coordinates. The other terminal receives the actual two-dimensional coordinates of each joint, determines the three-dimensional coordinates of the joints, and displays a virtual image corresponding to the human body image in the preset virtual space. For example, one terminal may be a smartphone that acquires the two-dimensional human body image and transmits the actual two-dimensional coordinates of each joint, and the other a smart television that displays the virtual image corresponding to the human body image, thereby mapping human motion onto the virtual image. This front-end/back-end separation reduces the conditions limiting the mapping of human motion onto the virtual image.
Optionally, this embodiment may further include a server; for example, it may comprise a smartphone, a server and a smart television. Since identifying each joint in the two-dimensional human body image is generally implemented by a neural network model, which is computation-heavy and demands high computing power, that step is placed on the smartphone or on the server with greater computing power, so that mapping human motion onto the virtual image is faster and the motion of the virtual image is smoother.
Step S20, determining the depth value of the joint corresponding to the limb according to the reference length and the preset actual length;
In this embodiment, the preset actual length of the limb between adjacent joints is acquired in advance. It may be entered manually by the user, or obtained from a two-dimensional human body image: when the image is captured, the user is prompted to face the camera and keep all limbs as close as possible to a single plane, i.e. the plane of the user's limbs is parallel to the plane of the two-dimensional human body image. At that moment the reference length of each limb equals its preset actual length, which is then stored; the preset actual lengths include the user's hand length, upper-arm length, thigh length and shank length. Note that when the user moves a limb, the reference length becomes the projection of the preset actual length onto the image plane; as the limb swings, the angle between the line of the limb and the image plane changes, and the reference length changes with it.
The direction corresponding to the depth value is perpendicular to the plane of the two-dimensional human body image. The three-dimensional coordinates of a joint can be determined from its two-dimensional coordinates in the image, the depth value and the direction corresponding to the depth value, where the dimensions of the three-dimensional coordinate system are mutually perpendicular.
Optionally, the depth value of the joint corresponding to the limb can be determined from the reference length and the preset actual length by the Pythagorean theorem. Taking the preset actual length as the hypotenuse of a right triangle and the reference length as one leg, the depth value is the other leg: for example, if the preset actual length is 5 and the reference length is 4, the depth value is 3, with pixels as the length unit. As shown in fig. 6, the hand length from the elbow joint to the fingertip is a preset actual length, the hand-elbow spacing between them is the reference length of the limb, and the arm depth is the depth value of the elbow joint relative to the fingertip. Note that the depth value is the depth of one joint relative to an adjacent joint, so the depth value of every joint in the body can be determined joint by joint from its neighbour.
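The Pythagorean depth calculation above can be sketched as follows; the clamping of noisy detections is an assumed safeguard not described in the source:

```python
import math

def joint_depth(preset_actual_length, reference_length):
    """Depth of a joint relative to its neighbour, in pixels: the preset
    actual (calibrated) limb length is the hypotenuse, the projected
    reference length one leg, and the depth the other leg."""
    if reference_length > preset_actual_length:
        # Detection noise can make the projection exceed the calibrated
        # length; clamp so the square root stays real (assumed handling).
        return 0.0
    return math.sqrt(preset_actual_length ** 2 - reference_length ** 2)
```

With a preset actual length of 5 and a reference length of 4, this yields the depth value of 3 from the example.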
Optionally, after the depth value of a joint is determined, it is converted from pixel units to an actual length unit, for example from pixels to meters, through a length-unit conversion coefficient S, to facilitate the subsequent coordinate calculations in the preset virtual space.
Step S30, determining the three-dimensional coordinates of the joint in a preset virtual space according to the depth value of the joint and the two-dimensional coordinates;
In this embodiment, the joint's coordinate in the third dimension, perpendicular to the plane of the two-dimensional human body image, is determined from its depth value; combining this third coordinate with the joint's two-dimensional coordinates yields its actual three-dimensional coordinates. For example, when the depth value of the wrist joint is 3, its third-dimensional coordinate relative to the elbow joint is 3 or -3; if the wrist's actual two-dimensional coordinates are (1,1) and the adjacent elbow's actual three-dimensional coordinates are (6,6,6), the wrist's actual three-dimensional coordinates are (1,1,3) or (1,1,9), where the sign of the third coordinate is determined by the direction corresponding to the depth value. That direction encodes, along the line perpendicular to the image plane, the front-to-back relationship between the joint and its adjacent joint, i.e. which of the two is closer to the camera. This calculation from two-dimensional to three-dimensional joint coordinates realizes the conversion from the 2D model to 3D positions.
After the joint's actual three-dimensional coordinates are determined, its three-dimensional coordinates in the preset virtual space are determined from the correspondence between the two-dimensional coordinate system of the human body image and the preset virtual space. For example, if the correspondence is a length ratio and that ratio is 5, a joint with actual three-dimensional coordinates (1,1,1) has coordinates (5,5,5) in the preset virtual space, with pixels as the length unit.
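Combining the depth value, its assumed direction and the scaling into the preset virtual space, a minimal sketch (all names, the sign convention and the uniform scale factor are assumptions):

```python
def joint_3d(joint_xy, neighbor_z, depth, toward_camera=True, scale=1.0):
    """Assemble a joint's 3-D coordinate: keep its image-plane (x, y),
    offset the neighbour joint's z by the computed depth (sign given by
    the depth direction), then scale into the preset virtual space."""
    z = neighbor_z + depth if toward_camera else neighbor_z - depth
    return (joint_xy[0] * scale, joint_xy[1] * scale, z * scale)
```

For the wrist at (1,1) with depth 3 relative to an elbow at z = 6, this reproduces the two candidate coordinates (1,1,9) and (1,1,3) from the example; with a length ratio of 5, (1,1,1) maps to (5,5,5).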
Optionally, an avatar exists in the preset virtual space so that the user's limb movements can be mapped into that space; the avatar is controlled to execute the movement corresponding to the user's limb movement, forming the virtual image.
Optionally, the mapping of the avatar is implemented with the front end and the back end separated. Specifically, the back end acquires the two-dimensional human body image, obtains the actual two-dimensional coordinates of each joint in the image through key-point detection based on a machine learning framework such as TensorFlow, converts the data corresponding to the actual two-dimensional coordinates into JSON format, and transmits it to the front end through a WebSocket connection. The front end then derives the three-dimensional coordinates from the actual two-dimensional coordinates and maps the corresponding motion to the avatar for graphic rendering; this front/back-end separation broadens the applications of avatar mapping. The back end may be a server with stronger computing power, so as to achieve smoother motion capture. Optionally, a TensorFlow deep learning framework is deployed in the back-end module, the neural network is MobileNet, and the corresponding network weights are preset values obtained through prior training. As shown in fig. 7, fig. 7 is the output data format of the key points identified by the neural network, where y in fig. 7 is the vertical coordinate, x is the horizontal coordinate, and (x, y) is the two-dimensional coordinate of a key point.
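A sketch of the back-end serialization step in this split (the payload field and joint names are illustrative assumptions; the patent only specifies that the detected coordinates are converted to JSON and sent to the front end over a WebSocket):

```python
import json

def encode_keypoints(keypoints):
    # Serialize {joint name: (x, y)} detections into the JSON payload
    # handed to the front end over the WebSocket connection.
    return json.dumps({
        "keypoints": [
            {"part": name, "x": x, "y": y}
            for name, (x, y) in keypoints.items()
        ]
    })

msg = encode_keypoints({"leftWrist": (110, 60), "leftElbow": (100, 50)})
print(json.loads(msg)["keypoints"][0])  # {'part': 'leftWrist', 'x': 110, 'y': 60}
```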
And step S40, displaying a virtual image corresponding to the human body image in the preset virtual space according to the three-dimensional coordinates.
In this embodiment, after the three-dimensional coordinates of a joint in the preset virtual space are determined, those coordinates are taken as the target coordinates of the virtual joint corresponding to that joint in the avatar in the preset virtual space. The virtual joint is controlled to move from its current position to the position corresponding to the target coordinates, so that the mapping from the human body motion to the avatar motion is realized; during this movement, the image corresponding to the avatar, that is, the virtual image corresponding to the human body image, is displayed. In the avatar, meters are generally used as the unit of length, so after the three-dimensional coordinates of the joints are converted into meters through a length-unit conversion coefficient S, the human body motions are mapped to the avatar more conveniently.
In the technical scheme disclosed in this embodiment, the three-dimensional coordinates of the human joints are determined by acquiring the two-dimensional coordinates of the joints and combining them with the reference lengths between adjacent joints, and the three-dimensional coordinates are mapped to the avatar. The conversion from two-dimensional to three-dimensional coordinates is thus realized, and motion mapping is freed from the constraints of motion-capture suits and depth cameras, making it more convenient.
In the second embodiment, as shown in fig. 3, on the basis of the embodiment shown in fig. 2 described above, step S10 includes:
step S11, acquiring the reference two-dimensional coordinates of each joint in the current frame human body image shot by the camera and the adjacent two-dimensional coordinates of each joint in the adjacent image frame;
In this embodiment, because the identification of each joint in the two-dimensional human body image captured by the camera may be inaccurate, the coordinates of each joint read directly from the image may also be inaccurate. Therefore, the two-dimensional coordinates of each joint read directly from the current frame of the human body image captured by the camera, that is, the reference two-dimensional coordinates of each joint, and the two-dimensional coordinates of each joint read directly from adjacent image frames, that is, the adjacent two-dimensional coordinates of each joint, are acquired. The shooting time of an adjacent image frame is adjacent to that of the current frame, but adjacent image frames are not limited to the frame immediately before and the frame immediately after the current frame; they may include multiple preceding and following frames. For example, when the camera captures images at 1 frame per second, the image frames within the two seconds before and the three seconds after the shooting time of the current frame are acquired and taken as the adjacent image frames.
And step S12, determining the actual two-dimensional coordinates of the joints according to the reference two-dimensional coordinates of each joint and the adjacent two-dimensional coordinates.
In this embodiment, the actual two-dimensional coordinates of each joint are made more accurate through filtering. For example, the actual two-dimensional coordinates of the joint may be determined by moving-average filtering: the mean of the reference two-dimensional coordinates of the joint and its adjacent two-dimensional coordinates is calculated, and this mean two-dimensional coordinate is taken as the actual two-dimensional coordinate of the joint in the current frame of the human body image. For example, when the window of the moving-average filter is 3, the reference two-dimensional coordinates of the joint and the two-dimensional coordinates of the joint in the previous and next frames are acquired; if the reference two-dimensional coordinates of the joint are (110,60), the coordinates of the joint in the previous frame are (100,50), and the coordinates in the next frame are (105,55), then the actual two-dimensional coordinates of the joint are (105,55), with the pixel as the unit of length. It should be noted that moving-average filtering is centered on the current frame and takes at least one frame before it and at least one frame after it, so as the current time changes, the current frame changes and the corresponding adjacent image frames change with it. Through filtering, the actual two-dimensional coordinates of the joints become more accurate, the virtual-joint jitter and drift of the avatar caused by errors in the actual two-dimensional coordinates are reduced, and the virtual character moves more naturally.
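The moving-average step can be sketched as follows (window of 3, with the numbers from the example above; the function name is illustrative):

```python
def smooth_keypoint(window_coords):
    # Moving-average filter over the per-frame (x, y) detections of one
    # joint; the list is centered on the current frame (previous frame,
    # current reference frame, next frame for a window of 3).
    n = len(window_coords)
    return (sum(x for x, _ in window_coords) / n,
            sum(y for _, y in window_coords) / n)

print(smooth_keypoint([(100, 50), (110, 60), (105, 55)]))  # (105.0, 55.0)
```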
Optionally, after the mean two-dimensional coordinate is obtained, whether the mean two-dimensional coordinate is credible may be further judged. When it is credible, it is taken as the actual two-dimensional coordinate of the joint. When it is not credible, either the current mean two-dimensional coordinate is discarded, so that the three-dimensional coordinate of the joint is not calculated and is not mapped to the avatar, or the reference two-dimensional coordinate of the joint is taken as the actual two-dimensional coordinate of the joint.
Optionally, whether the mean two-dimensional coordinate is credible is judged by means of De Morgan's laws. Specifically, the confidences of the reference two-dimensional coordinates and the adjacent two-dimensional coordinates of the joint are acquired, and the confidence of the mean two-dimensional coordinate is calculated from them; the confidences of the reference and adjacent two-dimensional coordinates can be obtained from the neural network model that identifies each joint in the two-dimensional human body image. When the adjacent two-dimensional coordinates of the joint correspond to the frames immediately before and after the current frame, the confidence of the mean two-dimensional coordinate is calculated as:
P = 1 - (1 - P1) * (1 - P2) * (1 - P3),

where P is the confidence of the mean two-dimensional coordinate, P1 is the confidence of the reference two-dimensional coordinates of the joint, P2 is the confidence of the adjacent two-dimensional coordinates of the joint in the frame before the current frame, and P3 is the confidence of the adjacent two-dimensional coordinates of the joint in the frame after the current frame. It should be noted that when the joint has adjacent two-dimensional coordinates in multiple frames before and after the current frame, the above formula can be expanded accordingly.
When the confidence of the mean two-dimensional coordinate is greater than a preset confidence, the mean two-dimensional coordinate is judged to be credible; when it is less than or equal to the preset confidence, the mean two-dimensional coordinate is judged not to be credible and is discarded as a noise point, so as to filter out high-frequency noise in the two-dimensional coordinates. By checking the confidence of the mean two-dimensional coordinate, the accuracy of the actual two-dimensional coordinates of the joint is further improved, the jitter and drift of the avatar's virtual joints are further reduced, and the virtual character moves more naturally.
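The confidence combination and threshold test can be sketched as follows (the preset confidence of 0.95 is an assumed value; the patent does not fix the threshold):

```python
def mean_confidence(confidences):
    # P = 1 - (1 - P1)(1 - P2)...(1 - Pn): the mean coordinate is
    # credible unless every contributing detection is unreliable,
    # following the De Morgan-style formula above.
    p_all_wrong = 1.0
    for p_i in confidences:
        p_all_wrong *= (1.0 - p_i)
    return 1.0 - p_all_wrong

PRESET_CONFIDENCE = 0.95  # assumed threshold, not from the patent

p = mean_confidence([0.9, 0.8, 0.7])
print(round(p, 3), p > PRESET_CONFIDENCE)  # 0.994 True
```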
In the technical scheme disclosed in this embodiment, the actual two-dimensional coordinates of a joint are determined from the reference two-dimensional coordinates of the joint in the current frame of the human body image and the adjacent two-dimensional coordinates of the joint in adjacent image frames. By integrating multiple two-dimensional coordinates, the actual two-dimensional coordinates of the joint become more accurate, the virtual joints of the avatar jitter and drift less, and the motion of the virtual character is more natural. Through filtering and the confidence-based screening of the two-dimensional coordinates, the filtered two-dimensional coordinates fit the actual joint positions more closely, and the offset and jitter of the virtual joints in the avatar are further reduced.
In yet another embodiment, as shown in fig. 4, on the basis of the embodiment shown in any one of fig. 2 to 3, the step S30 includes:
step S31, determining the three-dimensional coordinates of the joint in the preset virtual space according to the depth value, a preset coordinate direction and the two-dimensional coordinates,
wherein the preset coordinate direction is perpendicular to the plane in which the two-dimensional coordinates lie.
In this embodiment, after the depth value of a joint is determined, the preset coordinate direction is used to determine the third-dimension coordinate of the joint relative to an adjacent joint. The preset coordinate direction is a predetermined direction perpendicular to the plane of the two-dimensional coordinates of the two-dimensional human body image, that is, the direction of the third dimension. For example, when the depth value of a joint is 3, the third-dimension coordinate of the joint relative to the adjacent joint is 3 in the preset coordinate direction and -3 in the opposite of the preset coordinate direction. When the actual two-dimensional coordinates of the joint are (1,1) and the actual three-dimensional coordinates of the adjacent joint are (6,6,6), the actual three-dimensional coordinates of the joint are (1,1,9) in the preset coordinate direction and (1,1,3) in the opposite of the preset coordinate direction.
After the actual three-dimensional coordinates of the joint are determined, the three-dimensional coordinates of the joint in the preset virtual space are determined according to the correspondence between the two-dimensional coordinate system of the two-dimensional human body image and the preset virtual space. For example, when the three-dimensional coordinates of the joint in the preset virtual space are determined according to the length ratio between the two-dimensional coordinate system of the image and the preset virtual space, if the actual three-dimensional coordinates of the joint are (1,1,1) and the ratio is 5, the three-dimensional coordinates of the joint in the preset virtual space are (5,5,5), with the pixel as the unit of length.
Optionally, in order to determine whether the third-dimension direction of the joint is the preset coordinate direction or its opposite, after the three-dimensional coordinates of the joint are determined according to the preset coordinate direction, the included angle between adjacent limbs is detected. When the included angle is within a preset angle range, the motion to be mapped to the avatar is correct and consistent with normal human motion, and the step of displaying the virtual image corresponding to the human body image in the preset virtual space according to the three-dimensional coordinates of the joint is executed. The preset angle range is the normal range of motion of the human joint, and different joints have different normal ranges of motion. Since adjacent limbs are connected to the same joint, the included angle between adjacent limbs is taken as the current bending angle of the joint. When the included angle exceeds the preset angle range, the motion to be mapped to the avatar is wrong and inconsistent with normal human motion, indicating that the third-dimension direction of the joint is not the preset coordinate direction but its opposite. Therefore, the three-dimensional coordinates of the joint in the preset virtual space are re-determined according to the opposite of the preset coordinate direction, the depth value of the joint and the two-dimensional coordinates, and the virtual image corresponding to the human body image is displayed in the preset virtual space according to the re-determined three-dimensional coordinates.
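The angle check can be sketched as follows (the 0-160 degree range is an assumed normal range for illustration; the patent only states that the range is preset and joint-specific):

```python
import math

def limb_angle_deg(parent, joint, child):
    # Included angle at `joint` between the limbs joint->parent and
    # joint->child, in degrees; the limbs share the joint, so this is
    # the joint's current bending angle.
    u = tuple(p - j for p, j in zip(parent, joint))
    v = tuple(c - j for c, j in zip(child, joint))
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.dist(parent, joint) * math.dist(child, joint)
    return math.degrees(math.acos(dot / norm))

def depth_direction_ok(angle_deg, lo=0.0, hi=160.0):
    # Plausibility test: accept the chosen depth direction only if the
    # joint angle lies within its normal range of motion (bounds here
    # are assumed, not from the patent).
    return lo <= angle_deg <= hi

a = limb_angle_deg((0, 0, 0), (1, 0, 0), (1, 1, 0))
print(round(a), depth_direction_ok(a))  # 90 True
```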
In the technical scheme disclosed in this embodiment, the three-dimensional coordinates of the joint in the preset virtual space are determined according to the depth value, the preset coordinate direction and the two-dimensional coordinates, so as to ensure the accuracy of the three-dimensional coordinates of the joint.
In another embodiment, as shown in fig. 5, on the basis of the embodiment shown in any one of fig. 2 to 4, step S40 includes:
step S41, determining a first virtual joint corresponding to the joint in the virtual image;
step S42, acquiring the current three-dimensional coordinates of the first virtual joint;
in this embodiment, after determining the three-dimensional coordinates of the joint in the preset virtual space, according to the mapping relationship between the human body and the virtual image, a first virtual joint corresponding to the joint in the virtual image is determined, and the current three-dimensional coordinates of the first virtual joint are determined. From the current three-dimensional coordinates of the first virtual joint, the current position of the first virtual joint may be determined.
Step S43, determining a second virtual joint to be rotated and a rotation angle in the virtual image according to the current three-dimensional coordinates of the first virtual joint and the three-dimensional coordinates of the joint;
In this embodiment, mapping the human body motion to the virtual image means controlling the first virtual joint in the virtual image to move from the position corresponding to its current three-dimensional coordinates to the target position corresponding to the three-dimensional coordinates of the joint in the preset virtual space. Therefore, according to the current three-dimensional coordinates of the first virtual joint and the three-dimensional coordinates of the joint, the second virtual joint to be rotated and the rotation angle in the virtual image are solved through inverse kinematics (IK). For example, to move the wrist joint from position (1,1,1) to position (2,2,2), the elbow joint adjacent to the wrist joint must be controlled to rotate, and the rotation of the elbow joint changes the position of the wrist joint.
Inverse kinematics is the process of determining the joint parameters of an articulated movable object that achieve a desired pose. For example, given a three-dimensional model of a human body, how should the angles of the wrist and elbow be set so that the hand moves from a relaxed position to a waving posture? This problem is critical in robotics, because a robotic arm is steered by controlling its joint angles. Inverse kinematics is also important in game programming and three-dimensional modeling. The inverse kinematics function can be expressed as:
[The inverse kinematics formula appears only as an image (Figure BDA0002346927570000141) in the original publication; it maps the target three-dimensional coordinates to the joint rotation angles q.]
where the three-dimensional coordinates of the joint are (x, y, z), the current three-dimensional coordinates of the first virtual joint are (O, A, T), and q is the set of joint rotation angles solved by inverse kinematics. Because the target position of the first virtual joint may exceed a certain range, the first virtual joint cannot always reach the target position from the position corresponding to its current three-dimensional coordinates through the rotation of a single joint. Inverse kinematics therefore solves for multiple joints to be rotated, each corresponding to one rotation angle θ: a rotation angle θ1 for the second virtual joint, θ2 for the third virtual joint, and so on up to θn for the (n+1)-th virtual joint.
Step S44, controlling the second virtual joint to rotate according to the rotation angle, so that the first virtual joint moves from the position corresponding to the current three-dimensional coordinate to the target position corresponding to the three-dimensional coordinate of the joint.
In this embodiment, after the second virtual joint to be rotated and its rotation angle are determined, the second virtual joint in the virtual image is controlled to rotate by that angle, so that the first virtual joint moves from the position corresponding to its current three-dimensional coordinates to the target position corresponding to the three-dimensional coordinates of the joint, realizing the mapping from the human body motion to the virtual image.
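As an illustration of the kind of IK solve involved (a textbook analytic solution for a planar two-link chain, not the patent's formula, which is given only as an image), the shoulder and elbow angles that place a wrist at a target point can be computed with the law of cosines:

```python
import math

def two_link_ik(target_x, target_y, l1, l2):
    # Analytic inverse kinematics for a planar two-link chain (e.g.
    # upper arm of length l1, forearm of length l2): given the wrist
    # target, return the shoulder and elbow rotation angles in radians.
    # Returns one of the two mirror-image solutions.
    d2 = target_x ** 2 + target_y ** 2
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        raise ValueError("target out of reach")
    cos_elbow = (d2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(target_y, target_x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Fully stretched arm along +x: both angles are 0.
s, e = two_link_ik(2.0, 0.0, 1.0, 1.0)
print(round(s, 6), round(e, 6))  # 0.0 0.0
```

Real character rigs have more joints and redundant degrees of freedom, so iterative solvers are typically used, but the principle of solving joint angles from a target position is the same.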
Optionally, before the second virtual joint is controlled to rotate according to the rotation angle, the three-dimensional distance between the position corresponding to the current three-dimensional coordinates and the target position corresponding to the three-dimensional coordinates of the joint is determined from the two sets of coordinates. The rotation angular velocity of the second virtual joint is then calculated from this three-dimensional distance, so that the distance decreases at a constant speed through the continuous adjustment of the angular velocity. Controlling the second virtual joint to rotate at this angular velocity through the rotation angle makes the motion of the first virtual joint smooth and the avatar's actions more natural.
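One way to derive such an angular velocity (a sketch under assumed values: the 0.5 s closing duration and the unit lever arm are illustrative, and the patent does not specify how the velocity is computed):

```python
import math

def rotation_speed(current, target, duration=0.5):
    # Pick the joint's angular velocity so the end effector covers the
    # remaining 3-D distance at a constant linear speed over `duration`
    # seconds; with v = omega * r, omega = v / lever_arm.
    distance = math.dist(current, target)
    linear_speed = distance / duration  # constant-speed closing
    lever_arm = 1.0                     # assumed effector distance from the joint
    return linear_speed / lever_arm     # rad/s

print(round(rotation_speed((1, 1, 1), (2, 2, 2)), 4))  # 3.4641
```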
Optionally, because human joints have redundant degrees of freedom, the simultaneous rotation of multiple virtual joints in the avatar may be used to ensure that the avatar's movements fit the user's actual movements.
In the technical scheme disclosed in this embodiment, the second virtual joint to be rotated and its rotation angle are obtained from the first virtual joint corresponding to the joint in the virtual image and the current three-dimensional coordinates of the first virtual joint, and the second virtual joint is controlled to rotate by that angle, thereby moving the first virtual joint and realizing the mapping from the human body motion to the virtual image.
In addition, an embodiment of the present invention further provides an avatar mapping apparatus, where the avatar mapping apparatus includes: a memory, a processor and a mapping program of an avatar stored on the memory and operable on the processor, the mapping program of the avatar being executed by the processor to implement the steps of the mapping method of the avatar as described in the above various embodiments.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, on which a mapping program of an avatar is stored, and the mapping program of the avatar, when executed by a processor, implements the steps of the mapping method of the avatar as described in the above embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for mapping an avatar, the method comprising the steps of:
acquiring actual two-dimensional coordinates of each joint in a human body two-dimensional image, and determining the reference length of limbs between adjacent joints according to the actual two-dimensional coordinates;
determining the depth value of the joint corresponding to the limb according to the reference length and a preset actual length;
determining the three-dimensional coordinates of the joint in a preset virtual space according to the depth value of the joint and the two-dimensional coordinates;
and displaying a virtual image corresponding to the human body image in the preset virtual space according to the three-dimensional coordinate.
2. The avatar mapping method of claim 1, wherein said step of acquiring actual two-dimensional coordinates of each joint in the two-dimensional image of the human body comprises:
acquiring a reference two-dimensional coordinate of each joint in a current frame human body image shot by a camera and an adjacent two-dimensional coordinate of each joint in an adjacent image frame;
and determining the actual two-dimensional coordinates of the joints according to the reference two-dimensional coordinates and the adjacent two-dimensional coordinates of each joint.
3. The avatar mapping method of claim 2, wherein said step of determining actual two-dimensional coordinates of said joints from said reference two-dimensional coordinates and adjacent two-dimensional coordinates of each of said joints comprises:
acquiring a reference two-dimensional coordinate of the joint and a mean two-dimensional coordinate of the adjacent two-dimensional coordinates;
respectively acquiring a reference two-dimensional coordinate of the joint and confidence degrees of the adjacent two-dimensional coordinates;
calculating the confidence coefficient corresponding to the mean two-dimensional coordinate according to the confidence coefficient;
and when the confidence corresponding to the mean two-dimensional coordinate is greater than the preset confidence, taking the mean two-dimensional coordinate as the actual two-dimensional coordinate of the joint.
4. The avatar mapping method of claim 1, wherein said step of determining three-dimensional coordinates of said joints in a preset virtual space based on depth values of said joints and said two-dimensional coordinates comprises:
and determining a three-dimensional coordinate of the joint in a preset virtual space according to the depth value, the preset coordinate direction and the two-dimensional coordinate, wherein the preset coordinate direction is perpendicular to a plane where the two-dimensional coordinate is located.
5. The avatar mapping method of claim 4, wherein after said step of determining three-dimensional coordinates of said joints in a preset virtual space based on depth values of said joints and said two-dimensional coordinates, said avatar mapping method further comprises:
acquiring an included angle between adjacent limbs, wherein the adjacent limbs are connected with the same joint;
when the included angle is within a preset angle range, executing the step of displaying a virtual image corresponding to the human body image in the preset virtual space according to the three-dimensional coordinates of the joint;
and when the included angle exceeds the preset angle range, re-determining the three-dimensional coordinate of the joint in a preset virtual space according to the depth value, the opposite direction of the preset coordinate direction and the two-dimensional coordinate, wherein a virtual image corresponding to the human body image is displayed in the preset virtual space according to the re-determined three-dimensional coordinate.
6. The avatar mapping method of claim 1, wherein said displaying a virtual image corresponding to said human body image in said preset virtual space according to said three-dimensional coordinates comprises:
determining a first virtual joint in the virtual image corresponding to the joint;
acquiring the current three-dimensional coordinates of the first virtual joint;
determining a second virtual joint to be rotated and a rotation angle in the virtual image according to the current three-dimensional coordinate of the first virtual joint and the three-dimensional coordinate of the joint;
and controlling the second virtual joint to rotate according to the rotation angle so as to enable the first virtual joint to move from the position corresponding to the current three-dimensional coordinate to the target position corresponding to the three-dimensional coordinate of the joint.
7. The avatar mapping method of claim 6, wherein said step of controlling said second virtual joint to rotate according to said rotation angle comprises:
acquiring a three-dimensional distance between a position corresponding to the current three-dimensional coordinate and the target position;
determining a rotation angular velocity corresponding to the second virtual joint according to the three-dimensional distance;
and controlling the second virtual joint to rotate the rotation angle according to the rotation angular velocity.
8. The avatar mapping method of claim 1, wherein said step of acquiring actual two-dimensional coordinates of each joint in the two-dimensional image of the human body comprises:
and receiving the actual two-dimensional coordinates sent by a preset terminal, wherein the preset terminal acquires the human body two-dimensional image, identifies each joint in the human body two-dimensional image, and determines the actual two-dimensional coordinates of each joint according to a preset coordinate system.
9. An avatar mapping apparatus, comprising: memory, processor and a mapping program of an avatar stored on said memory and executable on said processor, said mapping program of an avatar when executed by said processor implementing the steps of the mapping method of an avatar according to any of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a mapping program of an avatar, which when executed by a processor implements the steps of the mapping method of an avatar according to any one of claims 1 to 8.
CN201911424401.9A 2019-12-30 2019-12-30 Virtual image mapping method, virtual image mapping device and computer readable storage medium Pending CN111161335A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911424401.9A CN111161335A (en) 2019-12-30 2019-12-30 Virtual image mapping method, virtual image mapping device and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN111161335A true CN111161335A (en) 2020-05-15

Family

ID=70560758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911424401.9A Pending CN111161335A (en) 2019-12-30 2019-12-30 Virtual image mapping method, virtual image mapping device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111161335A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
CN111880657A * (priority 2020-07-30, published 2020-11-03), Beijing SenseTime Technology Development Co., Ltd.: Virtual object control method and device, electronic equipment and storage medium
CN111880657B * (priority 2020-07-30, granted 2023-04-11), Beijing SenseTime Technology Development Co., Ltd.: Control method and device of virtual object, electronic equipment and storage medium
CN113365085A * (priority 2021-05-31, published 2021-09-07), Beijing Zitiao Network Technology Co., Ltd.: Live video generation method and device
CN113365085B * (priority 2021-05-31, granted 2022-08-16), Beijing Zitiao Network Technology Co., Ltd.: Live video generation method and device
WO2022252823A1 * (published 2022-12-08): Method and apparatus for generating live video

Similar Documents

Publication Publication Date Title
JP7273880B2 (en) Virtual object driving method, device, electronic device and readable storage medium
CN104536579B (en) Interactive three-dimensional outdoor scene and digital picture high speed fusion processing system and processing method
CN106527709B (en) Virtual scene adjusting method and head-mounted intelligent device
CN110728739B (en) Virtual human control and interaction method based on video stream
CN105252532A (en) Method of cooperative flexible attitude control for motion capture robot
WO2023071964A1 (en) Data processing method and apparatus, and electronic device and computer-readable storage medium
CN102508546A (en) Three-dimensional (3D) virtual projection and virtual touch user interface and achieving method
EP3644826A1 (en) A wearable eye tracking system with slippage detection and correction
CN112381003B (en) Motion capture method, motion capture device, motion capture equipment and storage medium
US11288871B2 (en) Web-based remote assistance system with context and content-aware 3D hand gesture visualization
CN109144252B (en) Object determination method, device, equipment and storage medium
CN109952552A (en) Visual cues system
US20120268493A1 (en) Information processing system for augmented reality
US20150339859A1 (en) Apparatus and method for navigating through volume image
WO2022174594A1 (en) Multi-camera-based bare hand tracking and display method and system, and apparatus
CN107145822B (en) User somatosensory interaction calibration method and system deviating from depth camera
US20170140215A1 (en) Gesture recognition method and virtual reality display output device
CN111161335A (en) Virtual image mapping method, virtual image mapping device and computer readable storage medium
CN115546365A (en) Virtual human driving method and system
WO2023273372A1 (en) Gesture recognition object determination method and apparatus
CN109531578B (en) Humanoid mechanical arm somatosensory control method and device
JP5759439B2 (en) Video communication system and video communication method
WO2022014700A1 (en) Terminal device, virtual object manipulation method, and virtual object manipulation program
CN113496168B (en) Sign language data acquisition method, device and storage medium
CN114201028B (en) Augmented reality system and method for anchoring display virtual object thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination