CN111103967A - Control method and device of virtual object

Control method and device of virtual object

Info

Publication number
CN111103967A
CN111103967A (application CN201811251781.6A)
Authority
CN
China
Prior art keywords
hand
virtual object
information
human hand
controlling
Prior art date
Legal status
Pending
Application number
CN201811251781.6A
Other languages
Chinese (zh)
Inventor
罗国中
Current Assignee
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Application filed by Beijing Microlive Vision Technology Co Ltd
Priority to CN201811251781.6A
Publication of CN111103967A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The present disclosure provides a control method and apparatus for a virtual object, an electronic device, and a computer-readable storage medium. The control method comprises: acquiring a video, wherein the video comprises a virtual object and a human hand; acquiring the position of a first virtual object; recognizing a first human hand in the video to obtain first information of the first human hand; recognizing a first motion of the first human hand; and controlling the display of the first virtual object according to the first motion of the first human hand, the first information of the first human hand, and the position of the first virtual object. By controlling the display attributes of a displayed virtual object directly through the motion of a human hand and the information of the human hand, the embodiments of the present disclosure solve the technical problem that display control of virtual objects in the prior art is inflexible.

Description

Control method and device of virtual object
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for controlling a virtual object, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, intelligent terminals support an increasingly wide range of applications, such as listening to music, playing games, chatting online, and taking photographs. As for photographing, the cameras of intelligent terminals now exceed ten million pixels, offering high definition and a photographing effect comparable to that of a professional camera.
At present, when photographing with an intelligent terminal, a user can not only use the photographing software built in at the factory to achieve conventional effects, but can also download an application program (APP) from the network to obtain additional effects, for example APPs providing dark-light detection, beauty-camera, and super-pixel functions. The beautifying function of an intelligent terminal typically includes effects such as skin-color adjustment, skin smoothing, eye enlargement, and face slimming, applied to the same degree to all faces recognized in an image. There are also applications that implement simple virtual-object display, such as showing a fixed virtual object at a fixed position on the screen, where the virtual object can perform some simple actions.
However, current virtual objects can only be displayed at fixed positions and at fixed times; changing a display attribute requires directly modifying the virtual object itself or manipulating it through a control, so control of the virtual object is very inflexible.
Disclosure of Invention
In a first aspect, an embodiment of the present disclosure provides a method for controlling a virtual object, including: acquiring a video, wherein the video comprises a virtual object and a human hand; acquiring the position of a first virtual object; identifying a first hand in the video to obtain first information of the first hand; identifying a first motion of a first human hand; and controlling the display of the first virtual object according to the first action of the first hand, the first information of the first hand and the position of the first virtual object.
Further, the identifying a first human hand in the video to obtain first information of the first human hand includes: identifying a first human hand in the video, and acquiring a first position of the first human hand and key points of the first human hand.
Further, the recognizing the first motion of the first human hand includes: recognizing the gesture of the first human hand according to the key points of the first human hand.
Further, the controlling the display of the first virtual object according to the first motion of the first human hand, the first information of the first human hand, and the position of the first virtual object includes: when the distance between the first position of the first human hand and the position of the first virtual object is smaller than a first threshold and the first motion of the first human hand is a predetermined first motion, controlling the display attributes of the first virtual object.
Further, the controlling the display attributes of the first virtual object includes: acquiring image rendering information according to the predetermined first motion and the first information; and controlling the display attributes of the first virtual object according to the image rendering information.
Further, the display attributes include: whether to display, the location of the display, the color of the display, the size of the display, and the transparency of the display.
Further, after controlling the display of the first virtual object according to the first motion of the first human hand, the first information of the first human hand, and the position of the first virtual object, the method further includes: tracking the first human hand to obtain second information of the first human hand; recognizing a second motion of the first human hand; and controlling the display of the first virtual object according to the second motion of the first human hand and the second information of the first human hand.
Further, the tracking the first human hand to obtain second information of the first human hand includes: tracking the movement of the first human hand, and when the first human hand stops moving, acquiring a second position of the first human hand and key points of the first human hand.
Further, the controlling the display of the first virtual object according to the second motion of the first human hand and the second information of the first human hand includes: when the second motion of the first human hand is a predetermined second motion, controlling the display attributes of the virtual object according to the second information of the first human hand.
Further, the video further includes a second virtual object, and the control method further includes: acquiring the position of a second virtual object; identifying a second hand in the video to obtain first information of the second hand; identifying a first motion of a second human hand; and controlling the display of the second virtual object according to the first action of the second hand, the first information of the second hand and the position of the second virtual object.
In a second aspect, an embodiment of the present disclosure provides a control apparatus for a virtual object, including:
the video acquisition module is used for acquiring a video, and the video comprises a virtual object and a human hand;
the position acquisition module is used for acquiring the position of the first virtual object;
the human hand information acquisition module is used for identifying a first human hand in the video to obtain first information of the first human hand;
the human hand motion recognition module is used for recognizing a first motion of a first human hand;
and the virtual object control module is used for controlling the display of the first virtual object according to the first action of the first hand, the first information of the first hand and the position of the first virtual object.
Further, the human hand information acquiring module includes:
and the human hand recognition module is used for recognizing a first human hand in the video and acquiring a first position of the first human hand and key points of the first human hand.
Further, the human hand motion recognition module is configured to: and recognizing the gesture of the first hand according to the key points of the first hand.
Further, the virtual object control module is configured to:
when the distance between the first position of the first human hand and the position of the first virtual object is smaller than a first threshold and the first motion of the first human hand is a predetermined first motion, controlling the display attributes of the first virtual object.
Further, the virtual object control module is configured to:
acquiring image rendering information according to the preset first action and the first information;
and controlling the display attribute of the first virtual object according to the image rendering information.
Further, the display attributes include: whether to display, the location of the display, the color of the display, the size of the display, and the transparency of the display.
Further, the control apparatus for the virtual object further includes:
the tracking module is used for tracking the first hand to obtain second information of the first hand;
the human hand motion recognition first module is used for recognizing a second motion of the first human hand;
and the virtual object control first module is used for controlling the display of the first virtual object according to the second action of the first hand and the second information of the first hand.
Further, the tracking module is configured to track movement of the first hand, and when the first hand stops moving, obtain a second position of the first hand and a key point of the first hand.
Further, the virtual object control first module is configured to control the display attributes of the virtual object according to the second information of the first human hand when the second motion of the first human hand is a predetermined second motion.
Further, the control apparatus for the virtual object further includes:
a position acquisition first module for acquiring a position of the second virtual object;
the human hand information acquisition first module is used for identifying a second human hand in the video to obtain first information of the second human hand;
the human hand motion recognition second module is used for recognizing the first motion of a second human hand;
and the virtual object control second module is used for controlling the display of the second virtual object according to the first action of the second hand, the first information of the second hand and the position of the second virtual object.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the control method of the virtual object of any one of the foregoing first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the control method of the virtual object of any one of the foregoing first aspect.
By controlling the display attributes of a displayed virtual object directly through the motion of a human hand and the information of the human hand, the embodiments of the present disclosure solve the technical problem that display control of virtual objects in the prior art is inflexible.
The foregoing is a summary of the present disclosure. To make its technical means clearly understandable, embodiments are described in detail below; the disclosure may also be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in describing them are briefly introduced below. The drawings described here show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a first embodiment of a method for controlling a virtual object according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a second embodiment of a method for controlling a virtual object according to the present disclosure;
fig. 3 is a flowchart of a third embodiment of a method for controlling a virtual object according to the present disclosure;
fig. 4a to 4e are schematic diagrams of specific examples of a control method for a virtual object according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a first embodiment of a control apparatus for a virtual object according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a second embodiment of a control apparatus for a virtual object according to the present disclosure;
fig. 7 is a schematic structural diagram of a third embodiment of a control apparatus for a virtual object according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below through specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from this specification. The described embodiments are merely some, not all, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific forms, and various modifications and changes in detail may be made without departing from its spirit. The features of the following embodiments and examples may be combined with one another where no conflict arises. All other embodiments derived by a person skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
Fig. 1 is a flowchart of a first embodiment of a method for controlling a virtual object according to an embodiment of the present disclosure. The control method of this embodiment may be executed by a control apparatus of a virtual object, which may be implemented as software or as a combination of software and hardware, and may be integrated in a device of an image processing system, such as an image processing server or an image processing terminal device. As shown in fig. 1, the method comprises the following steps:
step S101, acquiring a video, wherein the video comprises a first virtual object and a human hand;
the acquired video may be acquired by an image sensor, which refers to various devices that can capture images, and typical image sensors are video cameras, still cameras, and the like. In this embodiment, the image sensor may be a camera on a mobile terminal, such as a front-facing or rear-facing camera on a smart phone, and a video image acquired by the camera may be directly displayed on a display screen of the smart phone.
The video includes a virtual object, which may be any 2D or 3D virtual object — typically virtual weapons such as swords or handguns, virtual stationery such as pens or books, virtual wearable articles such as gloves or rings, or virtual stars, a moon, and the like; no specific limitation is made here, and any virtual object may be incorporated into the present disclosure. A virtual object may have a type: a type suitable for grasping by the hand, such as the sword, pistol, or pen above; a type suitable for wearing, such as a glove or ring; or a type suitable for placing on a palm, such as a book. An object may also belong to more than one type — a book, for example, may be held in the hand or placed on the palm. In this step, the type of the virtual object may be acquired together with the virtual object, either directly from the attribute data of the virtual object, or by acquiring the virtual object's ID and querying the type by that ID; the acquisition manner is optional and any manner may be applied to the present disclosure.
The video also includes a human hand, which may be a real hand captured by the image sensor.
Step S102, acquiring the position of a first virtual object;
in this step, the positions of all virtual objects in the video, which may be the coordinates of the virtual objects in the display device, may be the coordinates of the center point of the virtual object or the coordinates of a certain characteristic point of the virtual object, and this is not limited specifically herein
Step S103, identifying a first hand in the video to obtain first information of the first hand;
when the human hand is recognized, the position of the human hand can be positioned by using the color features, the human hand is segmented from the background, and feature extraction and recognition are carried out on the found and segmented human hand image. Specifically, color information of an image and position information of the color information are acquired by using an image sensor; comparing the color information with preset hand color information; identifying first color information, wherein the error between the first color information and the preset human hand color information is smaller than a first threshold value; and forming the outline of the human hand by using the position information of the first color information. Preferably, in order to avoid interference of the ambient brightness to the color information, image data of an RGB color space acquired by the image sensor may be mapped to an HSV color space, information in the HSV color space is used as contrast information, and preferably, a hue value in the HSV color space is used as color information, so that the hue information is minimally affected by brightness, and the interference of the brightness can be well filtered. The position of the human hand is roughly determined by using the human hand outline, and then the key point extraction is carried out on the human hand. The method comprises the steps of extracting key points of a human hand on an image, namely searching corresponding position coordinates of each key point of a human hand outline in a human hand image, namely key point positioning, wherein the process needs to be carried out based on the corresponding characteristics of the key points, searching and comparing in the image according to the characteristics after the image characteristics capable of clearly identifying the key points are obtained, and accurately positioning the positions of the key points on the image. Since the keypoints only occupy a very small area (usually only a few to tens of pixels) in the image, the regions occupied by the features corresponding to the keypoints on the image are usually very limited and local, and there are two feature extraction methods currently used: (1) extracting one-dimensional range image features vertical to the contour; (2) and extracting the two-dimensional range image characteristics of the key point square neighborhood. There are many ways to implement the above two methods, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, batch extraction methods, and so on. The number, accuracy and speed of the key points used by the various implementation methods are different, and the method is suitable for different application scenes. Similarly, for other target objects, the same principles can be used to identify the target object.
After the hand is recognized, a polygon is drawn around its outer contour to serve as the hand's circumscribed detection frame; the detection frame stands in for the hand and describes its position. Taking a rectangle as an example: after the key points of the hand are identified, the width of the hand at its widest point and its length at its longest point can be computed, and the circumscribed detection frame is determined from this width and length. One way to compute the longest and widest points is to extract the boundary key points of the hand, take the difference of the X coordinates of the two boundary key points farthest apart in X as the rectangle's width, and take the difference of the Y coordinates of the two boundary key points farthest apart in Y as the rectangle's length. If the hand is contracted into a fist, the detection frame can instead be set as the smallest circle covering the fist. Specifically, the center point of the detection frame — the intersection of its diagonals — can be used as the position of the hand; for a fist, the center of the circle can serve as its position.
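A minimal sketch of this detection frame and the hand position derived from it; the key-point format is an assumption.

```python
# Sketch: build the circumscribed rectangle of the hand from its boundary
# key points and take the intersection of the diagonals (the center) as
# the hand's position. Key points are assumed to be (x, y) tuples.
def detection_box(keypoints):
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    # Width: X difference of the two X-farthest boundary key points;
    # length: Y difference of the two Y-farthest boundary key points.
    box = (min(xs), min(ys), max(xs), max(ys))
    center = ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)
    return box, center
```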
The hand information further includes the detected hand key points. The number of key points can be set; generally they include key points of the hand contour and joint key points, each with a fixed number. For example, the key points may be numbered from top to bottom in the order: contour key points, thumb joint key points, index finger joint key points, middle finger joint key points, ring finger joint key points, and little finger joint key points. In a typical application there are 22 key points, each with a fixed number. In one embodiment, the position of the hand may also be represented by the key point at the center of the palm.
In one embodiment, before the hand information is calculated, the recognition data of the hand is smoothed and its coordinates normalized. Specifically, smoothing may average the images over multiple video frames: the hand is identified in each of several frames, the hand images are weighted-averaged, the averaged image is taken as the recognized hand, and the hand information is computed from it. Coordinate normalization unifies the coordinate ranges: if the coordinate system of the hand image captured by the camera differs from that of the hand image shown on the display screen, a mapping is needed from the large coordinate system to the small one. The hand information is obtained after smoothing and normalization.
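The smoothing and normalization step can be sketched as follows, assuming NumPy; smoothing is shown here on key-point coordinates rather than whole images, and the weighting scheme is illustrative.

```python
# Sketch of smoothing and coordinate normalization: key points are
# weighted-averaged over recent frames, then mapped from the camera
# coordinate system onto the display coordinate system.
import numpy as np

def smooth_keypoints(frames, weights=None):
    # frames: list of (N, 2) arrays of hand key points from consecutive frames
    stack = np.stack(frames)                            # (F, N, 2)
    w = np.ones(len(frames)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return np.tensordot(w, stack, axes=1)               # weighted mean, (N, 2)

def normalize(points, cam_size, screen_size):
    # Map the large (camera) coordinate system to the small (screen) one.
    scale = np.array([screen_size[0] / cam_size[0],
                      screen_size[1] / cam_size[1]])
    return points * scale
```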
Step S104, recognizing a first action of a first hand;
the human hand motion can comprise a gesture and/or a motion track of the human hand;
the gesture recognition may be performed by using the hand image information obtained in step S103, and placing the hand image information into a deep learning model for recognition, for example, inputting the key point information of the hand into the deep learning model to recognize the gesture of the hand, which is not described herein again.
In this step, the motion of the human hand can be recognized by recording its movement and analyzing the trajectory. Specifically, recording the trajectory first requires tracking the hand's movement. In a vision-based hand motion recognition system, tracking the trajectory means tracking the change of the gesture's position across a sequence of frames, so as to obtain the hand's position over continuous time; the quality of trajectory tracking directly affects the quality of motion recognition. Commonly used motion tracking methods include the particle filter algorithm, the Mean-shift algorithm, Kalman filtering, and skeletal tracking.
Target tracking based on particle filtering is a random search process that obtains a posterior probability estimate of the target distribution under a random motion model. It consists of two main steps: initial sampling and resampling. Initial sampling places particles randomly in the image, then computes the similarity between each particle and the features of the tracked target to obtain each particle's weight. The resampling stage redistributes the particles according to those weights. Initial sampling and resampling are repeated until the target is tracked.
The Mean-shift method is a non-parametric probability-density gradient-estimation algorithm. First a hand model is built by computing, in feature space, the probability of the feature values of pixels belonging to the hand in the initial image frame; then a model of the current frame is built by computing the probabilities of the feature values of all pixels in the region where the hand may be; finally, the mean-shift amount of the hand is obtained from the similarity between the initial hand model and the current-frame hand model. By the convergence of the mean-shift algorithm, the shift amount is computed iteratively until it converges to the hand's position in the current image frame.
Kalman filtering uses a series of mathematical equations to predict the present or future state of a linear system. In tracking the motion trajectory of the hand, Kalman filtering mainly observes the hand's position across a series of image frames and then predicts its position in the next frame. Because Kalman filtering is built on a posterior probability estimate for each time interval, it achieves good tracking in Gaussian-distributed environments; it can remove noise and still track the hand well under gesture deformation.
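A minimal sketch of this Kalman-filter prediction, assuming OpenCV's cv2.KalmanFilter with a constant-velocity state; the noise covariances are illustrative.

```python
# Sketch of Kalman-filter hand tracking with a constant-velocity state
# (x, y, vx, vy), assuming OpenCV; covariance values are illustrative.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)            # 4 state dims, 2 measured dims
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track(observed_xy):
    # Correct with the hand position observed in this frame, then
    # predict the hand's position in the next frame.
    kf.correct(np.array(observed_xy, np.float32).reshape(2, 1))
    pred = kf.predict()
    return float(pred[0, 0]), float(pred[1, 0])
```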
With the widespread adoption of Microsoft Kinect, many researchers use the skeletal-point tracking specific to Kinect sensors for hand-tracking research. Within the sensor's field of view, the Kinect can provide complete skeletal tracking — 20 joint points over the whole body — for one or two users. Skeletal tracking is divided into active and passive tracking: in active mode, up to two possible users in the field of view are selected for tracking; in passive mode, the skeletal points of at most six users can be tracked, with the remaining four users receiving position tracking only. The principle of Kinect skeletal tracking is to classify 32 parts of the human body on the obtained depth image by machine learning, and to find the skeletal joint point information of each part.
Since key points of the hand skeleton are collected in this step, a skeleton-based trajectory tracking method can be used preferentially in the present disclosure. The moving distance of a hand key point between two consecutive frames is computed; when the distance is smaller than a preset threshold, the key point's position is considered unchanged, and when it remains unchanged for a preset number of consecutive frames, that position is taken as a start point or end point of the hand motion. Typically, the threshold may be set to 1 cm, and a position unchanged for 6 consecutive frames is taken as a start or end point. The positions of the key points in the image frames between the start and end points then form the hand's motion trajectory; this trajectory is compared with preset motion trajectories, and when the similarity exceeds a preset similarity threshold, it is recognized as the corresponding hand motion.
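The start/end-point detection just described can be sketched as follows in plain Python; per-frame positions are assumed already converted to centimeters.

```python
# Sketch of start/end-point detection: a key point that moves less than
# the threshold (e.g. 1 cm) for six consecutive frames marks a start or
# end point of the hand motion.
import math

def motion_anchors(positions, threshold_cm=1.0, still_frames=6):
    # positions: per-frame (x, y) of one tracked hand key point, in cm
    anchors, run = [], 1
    for prev, cur in zip(positions, positions[1:]):
        if math.dist(prev, cur) < threshold_cm:
            run += 1
            if run == still_frames:          # held still long enough:
                anchors.append(cur)          # a start or end point
        else:
            run = 1
    return anchors
```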
Step S105, controlling the display of the first virtual object according to the first action of the first hand, the first information of the first hand and the position of the first virtual object.
In this step, when a predetermined hand motion — such as making a fist — is recognized, the display attributes of the virtual object are determined, such as whether it is displayed and its display position, color, size, and transparency. In one embodiment, when the distance between the first position of the first human hand and the position of the first virtual object is smaller than a first threshold, the first human hand and the virtual object are judged to overlap; at this time the first motion of the first human hand is recognized, and when it is a predetermined first motion, the display attributes of the first virtual object are controlled. In one embodiment, when the first motion of the first human hand is recognized to be a predetermined first motion, image rendering information is acquired according to the predetermined first motion, and the display attributes of the first virtual object are controlled according to the image rendering information.
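A minimal sketch of this control rule; the object representation, the threshold value, and the "fist" gesture label are illustrative assumptions.

```python
# Sketch of the control rule of step S105: overlap test plus
# predetermined-motion test, then apply display attributes.
import math

def control_display(hand_pos, hand_action, obj, rendering_info,
                    threshold=50.0, predetermined="fist"):
    # Overlap test: distance between the hand's first position and the
    # first virtual object's position must be below the first threshold.
    if math.dist(hand_pos, obj["position"]) >= threshold:
        return obj
    # On a match, apply the display attributes obtained from the image
    # rendering information (visibility, position, color, size, alpha).
    if hand_action == predetermined:
        obj.update(rendering_info)
    return obj

star = {"position": (120.0, 80.0), "visible": True}
star = control_display((118.0, 83.0), "fist", star, {"visible": False})
```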
In a specific embodiment, the display attributes of the virtual object — whether it is displayed, its display position, color, size, transparency, and so on — may be obtained from the image rendering information. For whether to display: in one embodiment, when the first motion of the first human hand is recognized as a predetermined first motion, the virtual object is not displayed. The display position may be associated with the hand's position: in one embodiment the hand position is the center point of the circumscribed detection frame, and the display position of the virtual object may directly coincide with that center point, so that the center of the virtual object coincides with the center of the detection frame; alternatively, the display position of the virtual object may maintain a fixed positional relationship with the center point, for example being located one length unit forward along the Y axis from the center point, where the length unit may be self-defined (e.g., 1 length unit equals 1 cm); no limitation is made here. In short, the display position of the virtual object can be determined through some fixed relationship. For more accurate positioning, hand key points can also be used: the virtual object can be mounted on certain key points of the hand. In one implementation, three points are set on the virtual object, corresponding to three key points of the hand, and the display position of the virtual object is determined by this correspondence.
In one embodiment, the predetermined first motion may be associated with a color of a virtual object, the color of the virtual object being changed to a predetermined color when the first motion of the first human hand is the predetermined first motion.
In a specific embodiment, the display size of the virtual object may also be obtained from the hand information. For example, the size of the virtual object may be determined by the area of the hand's circumscribed detection frame: a correspondence between the detection-frame area and the virtual object size can be preset, or the size can be determined dynamically from the detection-frame area. For dynamic sizing, the area of the circumscribed detection frame when the hand is first detected can be taken as the original size 1, with the virtual object displayed at its original size. When the hand moves toward or away from the image sensor, the detection-frame area changes: if the hand moves backward and the area becomes 0.5 times the original, the virtual object is scaled to 0.5 times its original size; if the hand moves forward and the area becomes 2 times the original, the virtual object is scaled to 2 times its original size. In this way, scaling of the virtual object can be controlled flexibly. The scaling can also be controlled by a function: with S the original area of the circumscribed detection frame and S1 the current area, the scaling ratio R of the virtual object may be set as R = (S1/S)^2, so that the scaling is non-linear and more effects can be achieved. The control function can be set arbitrarily as required; the above is only an example. The hand information used to derive the display size is not limited to the detection-frame area; it may also be the side length of the detection frame, the distance between hand key points, and so on; no limitation is made here.
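The reconstructed scaling function can be stated compactly; note that the squared rule is the non-linear variant, distinct from the linear 0.5x/2x example above.

```python
# The non-linear scaling rule: with S the detection-frame area at first
# detection and S1 the current area, the scale is R = (S1 / S) ** 2.
def scale_ratio(initial_area, current_area):
    return (current_area / initial_area) ** 2

assert scale_ratio(100.0, 50.0) == 0.25   # hand moves back: shrink
assert scale_ratio(100.0, 200.0) == 4.0   # hand moves forward: enlarge
```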
In one embodiment, the acquiring image rendering information according to the predetermined first action includes: reading a rendering configuration file; and acquiring image rendering information from the rendering configuration file by using the preset first action and the first information.
In this embodiment, the rendering configuration file stores the storage path of a virtual object's sequence frames. The name or ID of the virtual object obtained in step S101 is used to look up its sequence frames in the configuration file; all the sequence frames together form the complete virtual object. Specifically, a parameter "range":[idx_start, idx_end] can be set in the rendering configuration file, indicating that the frame sequence consists of the consecutive files from index idx_start to idx_end in the file list; or parameters "idx0", "idx1", ... can be set, indicating that the files idx0, idx1, ... in the file list compose the sequence in order.
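A sketch of reading the frame sequence from the rendering configuration file; the JSON layout is an assumption based on the "range" and explicit-index parameters described above.

```python
# Sketch: resolve a virtual object's sequence frames from its rendering
# configuration. The JSON structure is assumed, not prescribed.
import json

def load_sequence_frames(config_path, file_list):
    with open(config_path) as f:
        cfg = json.load(f)
    if "range" in cfg:                      # consecutive files idx_start..idx_end
        start, end = cfg["range"]
        return file_list[start:end + 1]
    return [file_list[i] for i in cfg["idx"]]   # explicit ordered indices
```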
The rendering configuration file also includes association parameters for the virtual object's position, describing which hand key points the sequence frames are associated with; all key points may be associated by default, and the object may be set to follow several key points. Besides the association parameters, the configuration file includes a positional-relationship parameter "point" between the virtual object and the key points. "point" may contain two groups of association points, with "point0" the first group and "point1" the second. For each group, "point" describes the anchor-point position in the camera, computed as the weighted average of several key points and their weights; the "idx" field gives each key point's number, and for a hand with a detection frame the values "topleft", "topright", "bottomleft" and "bottomright" may also be used, corresponding to the four corners of the hand's circumscribed detection frame (or the four corners of the screen in the foreground). For example, suppose the virtual object follows 4 key points of the hand, namely key points No. 9, 10, 11 and 12, each with weight 0.25, and the key points' coordinates are (X9, Y9), (X10, Y10), (X11, Y11), (X12, Y12). Then the X coordinate of the anchor point followed by the virtual object is Xa = 0.25*X9 + 0.25*X10 + 0.25*X11 + 0.25*X12, and its Y coordinate is Ya = 0.25*Y9 + 0.25*Y10 + 0.25*Y11 + 0.25*Y12. It is understood that "point" may contain any number of groups of association points, not only two. In the specific example above, two anchor points are obtained and the virtual object moves following their positions; in practice there may be more than two anchor points, depending on the number of association-point groups used. The coordinates of each key point are obtained from the hand information acquired in step S103. Other rendering parameters are not exemplified here; in short, the rendering configuration file stores the parameters required for rendering the image, which either correspond directly to the virtual object and the hand information or are obtained by computation from them.
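The anchor-point computation can be sketched as follows; the configuration layout is assumed from the description above.

```python
# Sketch of the anchor-point computation: the key points named by "idx"
# are combined by weighted average, as in the worked example with key
# points 9-12 at weight 0.25 each.
def anchor_point(point_cfg, keypoints):
    # point_cfg example: {"idx": [9, 10, 11, 12], "weight": [0.25] * 4}
    xa = sum(w * keypoints[i][0]
             for i, w in zip(point_cfg["idx"], point_cfg["weight"]))
    ya = sum(w * keypoints[i][1]
             for i, w in zip(point_cfg["idx"], point_cfg["weight"]))
    return xa, ya
```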
Further, when the rendering configuration file is read, the method may include a step of setting up the rendering configuration file, so as to configure the rendering parameters in it.
As shown in fig. 2, in a second embodiment of the method for controlling a virtual object according to the present disclosure, after controlling the display of the first virtual object according to the first motion of the first human hand, the first information of the first human hand, and the position of the first virtual object, the method further includes:
s201, tracking the first hand to obtain second information of the first hand;
s202, identifying a second action of the first hand;
and S203, controlling the display of the first virtual object according to the second action of the first hand and the second information of the first hand.
This embodiment continues from the first embodiment: the first human hand has made the predetermined first motion and the virtual object has been displayed according to its display attributes. Tracking the hand includes tracking its movement to obtain second information of the first human hand, which mainly includes the first hand's key points and the second position of the first hand — that is, the hand may have moved, though the second position may also be the same as the first position (the hand has not moved). In one embodiment, the movement of the first human hand is tracked, and when it stops moving, the second position and the key points of the first human hand are obtained. The second motion of the first human hand is then recognized; the recognition method may be the same as in the previous embodiment and is not repeated here. When the second motion of the first human hand is a predetermined second motion, the display attributes of the virtual object are controlled according to the second information of the first human hand. Likewise, the display attributes may be controlled according to the rendering information in the configuration file, in the same way as in the previous embodiment, which is not repeated here.
In this embodiment, based on the previous embodiment, the information and motion of the human hand are continuously tracked, and the display of the virtual object is controlled according to the new information and motion of the human hand.
As shown in fig. 3, in a third embodiment of the method for controlling a virtual object according to the present disclosure, on the basis of the first embodiment, the video further includes a second virtual object, and the method further includes:
s301: acquiring the position of a second virtual object;
s302: identifying a second hand in the video to obtain first information of the second hand;
s303: identifying a first motion of a second human hand;
and S304, controlling the display of the second virtual object according to the first action of the second hand, the first information of the second hand and the position of the second virtual object.
The specific implementation of steps S301 to S304 may refer to the description of the first embodiment and is not repeated here. In this embodiment there are multiple human hands and multiple virtual objects, and the hands control the virtual objects simultaneously — typically, the left and right hands perform display control on different virtual objects respectively.
For ease of understanding, refer to fig. 4a to 4e for a specific example of the control method of a virtual object disclosed in the present disclosure. Referring to fig. 4a, a video is acquired in which the virtual objects include stars, a moon and a sun, together with a human hand. As shown in fig. 4b, the hand moves to one of the stars and makes a fist. As shown in fig. 4c, the star disappears with the fist gesture, and the hand then moves. As shown in fig. 4d, the hand stops moving and makes a gesture with five fingers open. As shown in fig. 4e, the previously disappeared star is displayed at the position of the hand.
Fig. 5 is a schematic structural diagram of a first embodiment of a control apparatus for a virtual object according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus 500 includes: a video acquisition module 501, a position acquisition module 502, a human hand information acquisition module 503, a human hand motion recognition module 504, and a virtual object control module 505. Specifically:
a video obtaining module 501, configured to obtain a video, where the video includes a virtual object and a human hand;
a position obtaining module 502, configured to obtain a position of the first virtual object;
a human hand information obtaining module 503, configured to identify a first human hand in the video, and obtain first information of the first human hand;
a hand motion recognition module 504 for recognizing a first motion of a first hand;
a virtual object control module 505, configured to control display of the first virtual object according to the first motion of the first human hand, the first information of the first human hand, and the position of the first virtual object.
Further, the human hand information obtaining module 503 includes:
and the human hand recognition module is used for recognizing a first human hand in the video and acquiring a first position of the first human hand and key points of the first human hand.
Further, the human hand motion recognition module 504 is configured to: and recognizing the gesture of the first hand according to the key points of the first hand.
Further, the virtual object control module 505 is configured to:
when the distance between the first position of the first human hand and the position of the first virtual object is smaller than a first threshold and the first motion of the first human hand is a predetermined first motion, controlling the display attributes of the first virtual object.
Further, the virtual object control module 505 is configured to:
acquiring image rendering information according to the preset first action and the first information;
and controlling the display attribute of the first virtual object according to the image rendering information.
Further, the display attributes include: whether to display, the location of the display, the color of the display, the size of the display, and the transparency of the display.
The apparatus shown in fig. 5 can perform the method of the embodiment shown in fig. 1, and reference may be made to the related description of the embodiment shown in fig. 1 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 1, and are not described herein again.
Fig. 6 is a schematic structural diagram of a second embodiment of a control apparatus for a virtual object according to an embodiment of the present disclosure, and as shown in fig. 6, the apparatus 600 further includes, in addition to the first embodiment of the control apparatus for a virtual object: the tracking module 601, the human hand motion recognition first module 602 and the virtual object control first module 603.
The tracking module 601 is configured to track the first hand to obtain second information of the first hand;
a human hand motion recognition first module 602 for recognizing a second motion of the first human hand;
the virtual object control first module 603 is configured to control display of the first virtual object according to the second motion of the first human hand and the second information of the first human hand.
Further, the tracking module 601 is configured to track movement of the first hand, and when the first hand stops moving, obtain a second position of the first hand and a key point of the first hand.
Further, the virtual object control first module 603 is configured to, when the second motion of the first human hand is a predetermined second motion, control the display attribute of the virtual object according to the second information of the first human hand.
The apparatus shown in fig. 6 can perform the method of the embodiment shown in fig. 2, and reference may be made to the related description of the embodiment shown in fig. 2 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 2, and are not described herein again.
Fig. 7 is a schematic structural diagram of a third embodiment of a control apparatus for a virtual object according to an embodiment of the present disclosure. As shown in fig. 7, in addition to the first embodiment of the control apparatus, the apparatus 700 further includes: a position acquisition first module 701, a human hand information acquisition first module 702, a human hand motion recognition second module 703, and a virtual object control second module 704.
A position acquisition first module 701, configured to acquire a position of a second virtual object;
a hand information acquiring first module 702, configured to identify a second hand in the video to obtain first information of the second hand;
a hand motion recognition second module 703 for recognizing a first motion of a second hand;
the virtual object control second module 704 is configured to control display of the second virtual object according to the first motion of the second human hand, the first information of the second human hand, and the position of the second virtual object.
The apparatus shown in fig. 7 can perform the method of the embodiment shown in fig. 3, and reference may be made to the related description of the embodiment shown in fig. 3 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 3, and are not described herein again.
Referring now to FIG. 8, shown is a schematic diagram of an electronic device 800 suitable for use in implementing embodiments of the present disclosure. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device 800 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage means 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the electronic device 800. The processing means 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, or the like; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 809, installed from the storage device 808, or installed from the ROM 802. When executed by the processing device 801, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, by contrast, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wire, optical cable, RF (radio frequency), or any suitable combination of the foregoing.
The computer readable medium may be embodied in the above electronic device, or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a video, wherein the video comprises a virtual object and a human hand; acquire the position of a first virtual object; identify a first human hand in the video to obtain first information of the first human hand; identify a first motion of the first human hand; and control the display of the first virtual object according to the first motion of the first human hand, the first information of the first human hand, and the position of the first virtual object.
Alternatively, when the video further includes a second virtual object, the one or more programs, when executed by the electronic device, further cause the electronic device to: acquire the position of the second virtual object; identify a second human hand in the video to obtain first information of the second human hand; identify a first motion of the second human hand; and control the display of the second virtual object according to the first motion of the second human hand, the first information of the second human hand, and the position of the second virtual object.
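By way of illustration, the claimed control step could look like the following minimal sketch; the threshold value, the preset gesture name, and the particular display attributes that are changed are assumptions, since the disclosure leaves them open:

import math

FIRST_THRESHOLD = 0.1  # distance in normalized coordinates; the value is assumed

def control_display(virtual_object, hand_position, hand_motion, preset_motion="pinch"):
    """Change display attributes when the hand is near the object and the
    recognized motion matches a preset one (cf. claims 4 to 6 below)."""
    dx = hand_position[0] - virtual_object.position[0]
    dy = hand_position[1] - virtual_object.position[1]
    if math.hypot(dx, dy) < FIRST_THRESHOLD and hand_motion == preset_motion:
        # Display attributes: whether to display, display position,
        # display color, display size, display transparency.
        virtual_object.visible = True
        virtual_object.position = hand_position  # the object follows the hand
        virtual_object.transparency = 0.5        # illustrative attribute change

A program of this shape would be invoked once per video frame, after the hand identification and motion recognition steps have produced their outputs.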
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation of the unit itself; for example, the video acquisition module may also be described as "a module for acquiring a video that comprises a virtual object and a human hand".
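For instance, the human hand information acquisition module could be realized in software with an off-the-shelf keypoint detector. The disclosure names no particular library; the following minimal sketch assumes MediaPipe Hands and OpenCV purely for illustration:

import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(
    static_image_mode=False,      # treat input as a video stream
    max_num_hands=2,              # allow a first and a second human hand
    min_detection_confidence=0.5,
)

def hand_keypoints(bgr_frame):
    """Return one list of normalized (x, y) key points per detected hand."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return []
    return [[(lm.x, lm.y) for lm in hand.landmark]
            for hand in result.multi_hand_landmarks]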
The foregoing description covers only the preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of features described above, but also covers other technical solutions formed by any combination of those features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by substituting the features described above with (but not limited to) features with similar functions disclosed in this disclosure.

Claims (13)

1. A method for controlling a virtual object, comprising:
acquiring a video, wherein the video comprises a virtual object and a human hand;
acquiring the position of a first virtual object;
identifying a first hand in the video to obtain first information of the first hand;
identifying a first motion of a first human hand;
and controlling the display of the first virtual object according to the first motion of the first hand, the first information of the first hand, and the position of the first virtual object.
2. The method for controlling a virtual object according to claim 1, wherein the identifying a first hand in the video and obtaining first information of the first hand comprises:
and identifying a first hand in the video, and acquiring a first position of the first hand and key points of the first hand.
3. The method for controlling a virtual object according to claim 2, wherein the identifying a first motion of the first human hand comprises:
and recognizing the gesture of the first hand according to the key points of the first hand.
4. The method for controlling a virtual object according to claim 2, wherein the controlling the display of the first virtual object according to the first motion of the first human hand, the first information of the first human hand, and the position of the first virtual object comprises:
and when the distance between the first position of the first hand and the position of the first virtual object is smaller than a first threshold and the first motion of the first hand is a preset first motion, controlling the display attribute of the first virtual object.
5. The method for controlling a virtual object according to claim 4, wherein the controlling the display attribute of the first virtual object comprises:
acquiring image rendering information according to the preset first motion and the first information;
and controlling the display attribute of the first virtual object according to the image rendering information.
6. The method for controlling a virtual object according to claim 4, wherein the display attribute includes: whether to display, display position, display color, display size, and display transparency.
7. The method for controlling a virtual object according to claim 1, further comprising, after the controlling the display of the first virtual object according to the first motion of the first hand, the first information of the first hand, and the position of the first virtual object:
tracking the first hand to obtain second information of the first hand;
identifying a second motion of the first human hand;
and controlling the display of the first virtual object according to the second motion of the first hand and the second information of the first hand.
8. The method for controlling a virtual object according to claim 7, wherein the tracking the first hand to obtain the second information of the first hand comprises:
and tracking the movement of the first hand, and acquiring a second position of the first hand and key points of the first hand when the first hand stops moving.
9. The method for controlling a virtual object according to claim 7, wherein the controlling the display of the first virtual object according to the second information of the first hand and the second motion of the first hand comprises:
and when the second motion of the first hand is a preset second motion, controlling the display attribute of the first virtual object according to the second information of the first hand.
10. The method for controlling a virtual object according to claim 1, wherein a second virtual object is further included in the video, the method further comprising:
acquiring the position of a second virtual object;
identifying a second hand in the video to obtain first information of the second hand;
identifying a first motion of a second human hand;
and controlling the display of the second virtual object according to the first motion of the second hand, the first information of the second hand, and the position of the second virtual object.
11. An apparatus for controlling a virtual object, comprising:
the video acquisition module is used for acquiring a video, and the video comprises a virtual object and a human hand;
the position acquisition module is used for acquiring the position of the first virtual object;
the human hand information acquisition module is used for identifying a first human hand in the video to obtain first information of the first human hand;
the human hand motion recognition module is used for recognizing a first motion of a first human hand;
and the virtual object control module is used for controlling the display of the first virtual object according to the first action of the first hand, the first information of the first hand and the position of the first virtual object.
12. An electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that, when executing them, the processor implements the method for controlling a virtual object according to any one of claims 1-10.
13. A computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the method of controlling a virtual object according to any one of claims 1 to 10.
CN201811251781.6A 2018-10-25 2018-10-25 Control method and device of virtual object Pending CN111103967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811251781.6A CN111103967A (en) 2018-10-25 2018-10-25 Control method and device of virtual object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811251781.6A CN111103967A (en) 2018-10-25 2018-10-25 Control method and device of virtual object

Publications (1)

Publication Number Publication Date
CN111103967A true CN111103967A (en) 2020-05-05

Family

ID=70418645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811251781.6A Pending CN111103967A (en) 2018-10-25 2018-10-25 Control method and device of virtual object

Country Status (1)

Country Link
CN (1) CN111103967A (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050024388A1 (en) * 2003-07-30 2005-02-03 Canon Kabushiki Kaisha Image displaying method and apparatus
US20110107216A1 (en) * 2009-11-03 2011-05-05 Qualcomm Incorporated Gesture-based user interface
US20110261083A1 (en) * 2010-04-27 2011-10-27 Microsoft Corporation Grasp simulation of a virtual object
US20120117514A1 (en) * 2010-11-04 2012-05-10 Microsoft Corporation Three-Dimensional User Interaction
US20120113223A1 (en) * 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
US20130093788A1 (en) * 2011-10-14 2013-04-18 James C. Liu User controlled real object disappearance in a mixed reality display
US9383895B1 (en) * 2012-05-05 2016-07-05 F. Vinayak Methods and systems for interactively producing shapes in three-dimensional space
CN105378593A (en) * 2012-07-13 2016-03-02 索夫特克尼特科软件公司 Method and system for human-to-computer gesture based simultaneous interactions using singular points of interest on a hand
US20160093107A1 (en) * 2013-04-16 2016-03-31 Sony Corporation Information processing apparatus and information processing method, display apparatus and display method, and information processing system
US9996797B1 (en) * 2013-10-31 2018-06-12 Leap Motion, Inc. Interactions with virtual objects for machine control
WO2015123771A1 (en) * 2014-02-18 2015-08-27 Sulon Technologies Inc. Gesture tracking and control in augmented and virtual reality
US20160239080A1 (en) * 2015-02-13 2016-08-18 Leap Motion, Inc. Systems and methods of creating a realistic grab experience in virtual reality/augmented reality environments
CN104740869A (en) * 2015-03-26 2015-07-01 北京小小牛创意科技有限公司 True environment integrated and virtuality and reality combined interaction method and system
US20170069134A1 (en) * 2015-09-09 2017-03-09 Microsoft Technology Licensing, Llc Tactile Interaction In Virtual Environments
CN107885316A (en) * 2016-09-29 2018-04-06 阿里巴巴集团控股有限公司 A kind of exchange method and device based on gesture
US20180113505A1 (en) * 2016-10-26 2018-04-26 Htc Corporation Virtual reality interaction method, apparatus and system
US20180158222A1 (en) * 2016-12-01 2018-06-07 Canon Kabushiki Kaisha Image processing apparatus displaying image of virtual object and method of displaying the same
CN108273265A (en) * 2017-01-25 2018-07-13 网易(杭州)网络有限公司 The display methods and device of virtual objects
CN106804007A (en) * 2017-03-20 2017-06-06 合网络技术(北京)有限公司 The method of Auto-matching special efficacy, system and equipment in a kind of network direct broadcasting
CN108289180A (en) * 2018-01-30 2018-07-17 广州市百果园信息技术有限公司 Method, medium and the terminal installation of video are handled according to limb action

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
未来3D: "ManoPong - First ever integration of ARKit with Gesture Reco", page 14, Retrieved from the Internet <URL:https://www.bilibili.com/video/av14276051/> *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111880657A (en) * 2020-07-30 2020-11-03 北京市商汤科技开发有限公司 Virtual object control method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110517319B (en) Method for determining camera attitude information and related device
CN110555883B (en) Repositioning method and device for camera attitude tracking process and storage medium
US10043308B2 (en) Image processing method and apparatus for three-dimensional reconstruction
CN109934065B (en) Method and device for gesture recognition
CN111062981B (en) Image processing method, device and storage medium
CN110400304B (en) Object detection method, device, equipment and storage medium based on deep learning
CN108986016B (en) Image beautifying method and device and electronic equipment
CN110069125B (en) Virtual object control method and device
CN109151442B (en) Image shooting method and terminal
CN110072046B (en) Image synthesis method and device
CN111243668B (en) Method and device for detecting molecule binding site, electronic device and storage medium
CN111815754A (en) Three-dimensional information determination method, three-dimensional information determination device and terminal equipment
CN108830186B (en) Text image content extraction method, device, equipment and storage medium
CN110287891B (en) Gesture control method and device based on human body key points and electronic equipment
CN111833461B (en) Method and device for realizing special effect of image, electronic equipment and storage medium
CN112749613B (en) Video data processing method, device, computer equipment and storage medium
CN111199169A (en) Image processing method and device
CN111738914B (en) Image processing method, device, computer equipment and storage medium
CN113129411A (en) Bionic animation generation method and electronic equipment
CN109981989B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110069126B (en) Virtual object control method and device
WO2020037924A1 (en) Animation generation method and apparatus
CN110941327A (en) Virtual object display method and device
CN111258413A (en) Control method and device of virtual object
CN111601129B (en) Control method, control device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination