CN111258413A - Control method and device of virtual object - Google Patents

Control method and device of virtual object

Info

Publication number
CN111258413A
CN111258413A
Authority
CN
China
Prior art keywords
human hand
virtual objects
video
virtual
virtual object
Prior art date
Legal status
Pending
Application number
CN201811454118.6A
Other languages
Chinese (zh)
Inventor
罗国中
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201811454118.6A
Publication of CN111258413A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a method and apparatus for controlling a virtual object, an electronic device, and a computer-readable storage medium. The method for controlling a virtual object comprises: acquiring a video, wherein the video comprises a human hand; initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video; acquiring positions of the plurality of virtual objects; recognizing the human hand in the video to obtain first information of the human hand; recognizing a first action of the human hand; and controlling the display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first action of the human hand. By controlling the display attributes of displayed virtual objects directly through the action of the human hand and the information of the human hand, the embodiments of the present disclosure solve the technical problem that display control of virtual objects in the prior art is inflexible.

Description

Control method and device of virtual object
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for controlling a virtual object, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, intelligent terminals are used in an ever wider range of applications, such as listening to music, playing games, chatting online, and taking photographs. The cameras of intelligent terminals now exceed ten million pixels, offering high definition and a photographing effect comparable to that of a professional camera.
At present, when an intelligent terminal is used for photographing, traditional photographing effects can be achieved with the photographing software built in at the factory, and photographing effects with additional functions can be achieved by downloading an application program (APP) from the network, for example APPs providing dark-light detection, a beauty camera, or super-pixel functions. The beautifying functions of an intelligent terminal generally include effects such as skin color adjustment, skin smoothing, eye enlargement, and face thinning, and can apply the same degree of beautification to all faces recognized in an image. There are also applications that implement simple virtual object display functions, for example displaying a fixed virtual object at a fixed position on the screen, where the virtual object can perform some simple actions.
However, current virtual objects can only be displayed at a fixed position and a fixed time; if a display attribute of a virtual object needs to be changed, the virtual object itself must be modified directly, or the virtual object must be controlled through a UI control, making control of the virtual object very inflexible.
Disclosure of Invention
In a first aspect, an embodiment of the present disclosure provides a method for controlling a virtual object, including: acquiring a video, wherein the video comprises a human hand; initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video; acquiring positions of the plurality of virtual objects; recognizing the human hand in the video to obtain first information of the human hand; recognizing a first action of the human hand; and controlling the display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first action of the human hand.
Further, the initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video includes: acquiring a plurality of virtual objects; acquiring initialization positions of the plurality of virtual objects; displaying the plurality of virtual objects in the video according to the initialized position.
Further, the initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video includes: identifying a human hand in the video; reading a configuration file of the plurality of virtual objects in response to recognizing the second action of the human hand; initializing the virtual objects according to the configuration files of the virtual objects, and displaying the virtual objects in the video.
Further, the initializing the plurality of virtual objects according to the configuration files of the plurality of virtual objects and displaying the plurality of virtual objects in the video includes: analyzing the configuration files of the virtual objects, and acquiring the acquisition addresses and the initialization positions of the virtual objects; and acquiring the virtual objects according to the acquisition address, and displaying the virtual objects in the video according to the initialization position.
Further, the identifying a human hand in the video to obtain first information of the human hand includes: identifying the human hand in the video, and acquiring key points of the human hand and a first position of the human hand.
Further, the identifying the human hand in the video and acquiring the key points of the human hand and the first position of the human hand include: identifying the human hand in the video and acquiring the key points of the human hand; and acquiring a preset key point among the key points of the human hand as the first position of the human hand.
Further, the identifying a first action of the human hand includes: identifying a motion trajectory of the human hand according to the key points of the human hand, wherein the motion trajectory comprises a start point position of the human hand and an end point position of the human hand.
Further, the identifying the motion trajectory of the human hand according to the key points of the human hand includes:
tracking the key points of the human hand; when a first position of the key points does not change within a preset time, identifying the first position as the start point position of the human hand; identifying movement of the key points; and when a second position of the key points does not change within the preset time, identifying the second position as the end point position of the human hand.
Further, the controlling the display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first action of the human hand includes:
determining a first virtual object to be controlled according to the first information of the human hand and the positions of the plurality of virtual objects; and controlling the display position of the first virtual object according to the first action of the human hand.
Further, the controlling the display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first action of the human hand includes:
when the distance between the first position of the human hand and the position of a second virtual object of the plurality of virtual objects is less than a first threshold, moving the second virtual object from its current position to the end point position of the human hand.
In a second aspect, an embodiment of the present disclosure provides a control apparatus for a virtual object, including:
the video acquisition module is used for acquiring a video, wherein the video comprises a hand;
the initialization module is used for initializing a plurality of virtual objects and displaying the virtual objects in the video;
a virtual object position acquisition module for acquiring positions of the plurality of virtual objects;
the human hand recognition module is used for recognizing human hands in the video to obtain first information of the human hands;
the motion recognition module is used for recognizing a first motion of the human hand;
the display control module is used for controlling the display of at least one virtual object in the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects and the first action of the human hand.
Further, the initialization module includes:
the virtual object acquisition module is used for acquiring a plurality of virtual objects;
an initialization position acquisition module for acquiring initialization positions of the plurality of virtual objects;
the first initialization module is used for displaying the virtual objects in the video according to the initialization positions.
Further, the initialization module includes:
the first identification module is used for identifying human hands in the video;
a profile reading module for reading profiles of the plurality of virtual objects in response to identifying a second action of the human hand;
and the second initialization module is used for initializing the virtual objects according to the configuration files of the virtual objects and displaying the virtual objects in the video.
Further, the second initialization module includes:
the configuration file analysis module is used for analyzing the configuration files of the virtual objects to obtain the obtaining addresses and the initialization positions of the virtual objects;
and the second initialization submodule is used for acquiring the virtual objects according to the acquisition address and displaying the virtual objects in the video according to the initialization position.
Further, the human hand recognition module is further configured to identify the human hand in the video and acquire the key points of the human hand and the first position of the human hand.
Further, the hand recognition module further includes:
the key point acquisition module is used for identifying the human hand in the video and acquiring key points of the human hand;
and the first position acquisition module is used for acquiring a preset key point in the key points of the human hand as the first position of the human hand.
Further, the action recognition module further includes:
and the motion track recognition module is used for recognizing the motion track of the human hand according to the key points of the human hand, wherein the motion track comprises a starting point position of the human hand and an ending point position of the human hand.
Further, the motion trajectory identification module further includes:
the tracking module is used for tracking preset key points of the human hand;
the starting point position determining module is used for identifying the first position as the starting point position of the human hand when the first position of the preset key point does not change within preset time;
the mobile identification module is used for identifying the movement of the preset key point;
and the terminal position determining module is used for identifying the second position as the end point position of the human hand when the second position of the preset key point does not change within preset time.
Further, the display control module includes:
a virtual object determination module for determining a second virtual object to be controlled according to the first information of the human hand and the positions of the plurality of virtual objects;
and the first display control module is used for controlling the display position of the second virtual object according to the first action of the human hand.
Further, the display control module is further configured to: when the distance between the first position of the human hand and the position of a first virtual object of the plurality of virtual objects is less than a first threshold, move the first virtual object from its current position to the end point position of the human hand.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of controlling the virtual object according to any one of the first aspect.
In a fourth aspect, the present disclosure provides a non-transitory computer-readable storage medium, which stores computer instructions for causing a computer to execute the control method for the virtual object in any one of the foregoing first aspects.
The present disclosure provides a method and apparatus for controlling a virtual object, an electronic device, and a computer-readable storage medium. The method for controlling a virtual object comprises: acquiring a video, wherein the video comprises a human hand; initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video; acquiring positions of the plurality of virtual objects; recognizing the human hand in the video to obtain first information of the human hand; recognizing a first action of the human hand; and controlling the display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first action of the human hand. By controlling the display attributes of displayed virtual objects directly through the action of the human hand and the information of the human hand, the embodiments of the present disclosure solve the technical problem that display control of virtual objects in the prior art is inflexible.
The foregoing is a summary of the present disclosure. To make the technical means of the present disclosure clearer, so that it may be implemented in accordance with the description, and to make the above and other objects, features, and advantages of the present disclosure more readily understandable, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Drawings
To illustrate the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an embodiment of a method for controlling a virtual object according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating an example of a control method for a virtual object according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an embodiment of a control apparatus for a virtual object according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below through specific examples, and those skilled in the art will readily appreciate other advantages and effects of the present disclosure from the contents disclosed in this specification. It should be understood that the described embodiments are merely some, not all, of the embodiments of the present disclosure. The present disclosure may also be implemented or applied through other different specific embodiments, and various details in this specification may be modified or changed without departing from the spirit of the disclosure. It should be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments obtained by those of ordinary skill in the art based on the embodiments disclosed herein without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, number and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
Fig. 1 is a flowchart of a first embodiment of a method for controlling a virtual object according to an embodiment of the present disclosure. The control method of this embodiment may be executed by a control apparatus of a virtual object, which may be implemented as software or as a combination of software and hardware, and which may be integrated in a device in an image processing system, such as an image processing server or an image processing terminal device. As shown in fig. 1, the method comprises the following steps:
step S101, acquiring a video, wherein the video comprises a hand;
the acquired video may be acquired by an image sensor, which refers to various devices that can capture images, and typical image sensors are video cameras, still cameras, and the like. In the embodiment, the image sensor may be a camera on the mobile terminal, such as a front-facing or rear-facing camera on a smart phone, and a video image acquired by the camera may be directly displayed on a display screen of the smart phone.
The video also comprises a human hand, and the human hand can be a human hand collected by the image sensor.
Step S102, initializing a plurality of virtual objects, and displaying the virtual objects in the video;
the virtual object in the present disclosure may be any 2D or 3D virtual object, typically a virtual weapon such as a virtual sword, a virtual pistol, a virtual pen, a virtual book, a virtual wearable item such as a virtual glove, a virtual ring, etc., a virtual star, a moon, a virtual character, etc., and is not limited in particular, and any virtual object may be incorporated into the present disclosure.
In one embodiment, the initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video includes: acquiring a plurality of virtual objects; acquiring initialization positions of the plurality of virtual objects; and displaying the plurality of virtual objects in the video according to the initialization positions. In this embodiment, the virtual objects may be acquired from a preset address, which may optionally be a local address or a network address. Acquiring the initialization positions may take several forms: optionally, the initialization position of each of the plurality of virtual objects is acquired; optionally, an initialization position area of the plurality of virtual objects is acquired; or optionally, an initialization position function of the virtual objects is acquired, the function being a function of time. The virtual objects are then displayed in the video at the positions corresponding to the initialization positions. When the initialization position is a per-object position, the position of each virtual object in the video is determined directly from it; when it is an initialization position area, the virtual objects may be initialized randomly within that area; when it is an initialization position function, the initial position of each virtual object is determined by the value of the function at time 0, and afterwards the position of the virtual object in the video can be calculated from the time elapsed since initialization and the function.
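A minimal Python sketch of the three initialization modes just described; the `VirtualObject` class, its fields, and the `position_fn` signature are illustrative assumptions, not part of the patent.

```python
import random

class VirtualObject:
    def __init__(self, sprite, position):
        self.sprite = sprite        # 2D/3D asset to render
        self.position = position    # (x, y) in video coordinates

def init_fixed(sprites, positions):
    """Each object gets its own preset initialization position."""
    return [VirtualObject(s, p) for s, p in zip(sprites, positions)]

def init_in_region(sprites, region):
    """Objects are placed randomly inside an initialization position area."""
    (x0, y0), (x1, y1) = region
    return [VirtualObject(s, (random.uniform(x0, x1), random.uniform(y0, y1)))
            for s in sprites]

def init_with_function(sprites, position_fn):
    """Positions follow a time-dependent function; t = 0 at initialization."""
    return [VirtualObject(s, position_fn(i, 0.0)) for i, s in enumerate(sprites)]
```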
In one embodiment, an event triggering the initialization may further be added to trigger the initialization of the plurality of virtual objects. In this embodiment, the initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video includes: recognizing a human hand in the video; in response to recognizing a second action of the human hand, reading a configuration file of the plurality of virtual objects; and initializing the virtual objects according to the configuration file and displaying them in the video. The recognition of the human hand and of hand actions is described in detail in the following steps and is not repeated here. In this embodiment, when the hand is recognized to make the second action, the configuration files of the virtual objects are read and the virtual objects are initialized according to them. Initializing the plurality of virtual objects according to their configuration files and displaying them in the video comprises: parsing the configuration files to acquire the acquisition addresses and initialization positions of the virtual objects; then acquiring the virtual objects from the acquisition addresses and displaying them in the video according to the initialization positions. After the initialization positions are obtained, the initialization process is the same as above and is not repeated. As an example of the second action, it may be set as the action of hitting a table with a fist; when this action is recognized, the plurality of virtual objects are displayed at their initialization positions in the video.
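The patent does not specify a configuration file format, so the sketch below assumes a hypothetical JSON layout with a "url" field for the acquisition address (local path or network address) and an "init_position" field:

```python
import json
import urllib.request

def load_virtual_objects(config_path):
    """Parse a (hypothetical) JSON configuration file and fetch the assets."""
    with open(config_path, "r", encoding="utf-8") as f:
        config = json.load(f)
    objects = []
    for entry in config["virtual_objects"]:
        # The acquisition address may be a network address or a local path.
        if entry["url"].startswith(("http://", "https://")):
            data = urllib.request.urlopen(entry["url"]).read()
        else:
            with open(entry["url"], "rb") as asset:
                data = asset.read()
        objects.append({"asset": data, "position": tuple(entry["init_position"])})
    return objects
```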
Step S103, acquiring the positions of the virtual objects;
In this embodiment, the positions of the plurality of virtual objects in the video are acquired. Depending on the embodiment used in step S102, the positions may be acquired in several ways.
Optionally, when the initialization position is the initialization position of each of the plurality of virtual objects, the position of each of the plurality of virtual objects in the video is directly determined according to the initialization position, and at this time, the positions of the plurality of virtual objects in the video are fixed positions, and the positions of the plurality of virtual objects can be directly obtained through the initialization position;
optionally, when the initialization position is an initialization position area of the plurality of virtual objects, the plurality of virtual objects may be initialized randomly in the initialization position area. At this time, since the positions of the plurality of virtual objects are random, the positions of the virtual objects can be determined by identifying the feature point position of each virtual object, specifically, a video frame of the current time can be acquired, the virtual objects in the video frame are identified, the central feature point of the virtual object is identified, and the positions of the plurality of virtual objects are acquired according to the position of the central feature point in the video.
Optionally, when the initialization position is a function of the initialization positions of the plurality of virtual objects, the initialization position of each of the plurality of virtual objects is determined according to a value of the function at a time point 0. At this time, the time elapsed after initialization is recorded, and the time is taken as a parameter and is substituted into the function to calculate the positions of the virtual objects at the current time point.
It can be understood that the manner of acquiring the positions of the plurality of virtual objects in the video is not limited to the above manner, which is merely an example for facilitating understanding, and practically any manner that can acquire the positions of the virtual objects in the video can be applied to the present disclosure, and is not described herein again.
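To make the three lookup modes concrete, here is a hedged sketch; the object representation (dicts with a "position" key) and the `position_fn` signature are assumptions for illustration only:

```python
import time

def current_positions(objects, mode, init_time=None, position_fn=None):
    """Return the current video-space positions of the virtual objects."""
    if mode in ("fixed", "region"):
        # Fixed or randomly initialized objects keep their stored positions;
        # for "region" the positions could instead be re-detected from each
        # object's central feature point in the current frame.
        return [obj["position"] for obj in objects]
    if mode == "function":
        # Time-parameterized positions: substitute the time elapsed since
        # initialization into the position function.
        elapsed = time.time() - init_time
        return [position_fn(i, elapsed) for i in range(len(objects))]
    raise ValueError(f"unknown mode: {mode}")
```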
Step S104, identifying the human hand in the video to obtain first information of the human hand;
in one embodiment, the identifying the human hand in the video, obtaining first information of the human hand, includes: and identifying the human hand in the video, and acquiring the key point of the human hand and the first position of the human hand. Wherein the identifying the human hand in the video, obtaining the key point of the human hand and the first position of the human hand, comprises: identifying the human hand in the video and acquiring key points of the human hand; and acquiring a preset key point in the key points of the human hand as a first position of the human hand.
When recognizing the human hand, the position of the hand may first be located using color features: the hand is segmented from the background, and feature extraction and recognition are performed on the segmented hand image. Specifically, color information of the image and its position information are acquired by the image sensor; the color information is compared with preset hand color information; first color information whose error against the preset hand color information is smaller than a first threshold is identified; and the outline of the human hand is formed from the position information of the first color information. Preferably, to avoid interference from ambient brightness, the image data acquired by the image sensor in the RGB color space may be mapped to the HSV color space, with information in the HSV color space used for comparison. Preferably, the hue value in the HSV color space is used as the color information, since hue is least affected by brightness and interference from brightness can be filtered out well.
The hand outline roughly determines the position of the human hand, after which key points are extracted. Extracting key points of the human hand in the image is equivalent to finding the corresponding image coordinates of each hand contour key point, i.e., key point localization. This process is based on the features corresponding to the key points: once image features that clearly identify the key points are obtained, the image is searched and compared against those features to accurately locate the key points. Since key points occupy only a very small area of the image (usually only a few to a few dozen pixels), the regions occupied by their corresponding features are usually very limited and local. Two feature extraction methods are currently in use: (1) extracting one-dimensional range image features perpendicular to the contour; and (2) extracting two-dimensional range image features in a square neighborhood of the key point. There are many ways to implement these methods, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, and batch extraction methods. The number of key points, the accuracy, and the speed differ across these implementations, making them suitable for different application scenarios. The same principle can be used to recognize other target objects.
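As a rough illustration of the hue-based localization step, the following sketch maps a frame to HSV and thresholds the hue error against a preset hand hue; the specific hue value and tolerance are assumptions, and a real skin segmenter would also need to handle hue wraparound and lighting variation:

```python
import cv2
import numpy as np

SKIN_HUE = 10        # assumed preset hand hue (OpenCV hue range is 0-179)
HUE_TOLERANCE = 8    # assumed "first threshold" on the hue error

def find_hand_contour(frame_bgr):
    """Return the largest contour whose pixels match the preset hand hue."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.int16)
    # First color information: pixels whose hue error is below the threshold.
    mask = (np.abs(hue - SKIN_HUE) < HUE_TOLERANCE).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)  # rough outline of the hand
```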
After the human hand is recognized, a polygon is drawn outside the outer contour of the hand as a circumscribed detection box, which replaces the hand in describing its position. Taking a rectangle as an example: after the hand key points are recognized, the width of the widest part and the length of the longest part of the hand can be calculated, and the circumscribed detection box is determined from this width and length. One way to calculate the longest and widest points is to extract the boundary key points of the hand, take the difference between the X coordinates of the two boundary key points farthest apart in X as the width of the rectangle, and take the difference between the Y coordinates of the two boundary key points farthest apart in Y as its height. If the hand contracts into a fist, the detection box may be set to the smallest circle covering the fist. Specifically, the center point of the detection box, i.e., the intersection of its diagonals, may be used as the position of the hand; for a fist, the center of the circle may be used instead.
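A small sketch of the rectangular circumscribed detection box and its center point, built from boundary key points as described; the keypoint array format is an assumption:

```python
import numpy as np

def hand_detection_box(keypoints):
    """keypoints: (N, 2) array of hand boundary keypoint coordinates.
    Returns the circumscribed rectangle and its center (the hand position)."""
    pts = np.asarray(keypoints, dtype=np.float32)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    width = x_max - x_min    # largest X-coordinate difference
    height = y_max - y_min   # largest Y-coordinate difference
    center = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)  # diagonal intersection
    return (x_min, y_min, width, height), center
```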
The human hand information further includes the detected hand key points. The number of key points may be preset, and generally includes contour key points and joint key points, each with a fixed number. For example, the key points may be numbered from top to bottom in the order of contour key points, thumb joint key points, index finger joint key points, middle finger joint key points, ring finger joint key points, and little finger joint key points; in a typical application there are 22 key points, each with a fixed number. In one embodiment, a key point at the center of the palm may represent the position of the hand. In one embodiment, preset key points may be designated to represent the position of the hand; typically, a key point of the index finger serves as the determination point, so that when the user's hand clicks and moves in the video, the positions of the click and the movement are determined by the key point of the extended index finger.
In one embodiment, before the hand information is calculated, the recognition data of the human hand is smoothed and coordinate-normalized. Specifically, the smoothing may average images over multiple video frames and take the averaged image as the recognized image; applied to the human hand here, the hand may be recognized in multiple frames, the hand images weighted and averaged, and the averaged hand image taken as the recognized hand from which the hand information is calculated. Coordinate normalization unifies the coordinate ranges: if the coordinates of the hand image captured by the camera and of the hand image shown on the display screen are not unified, a mapping is needed from the large coordinate system to the small one. The hand information is obtained after smoothing and normalization.
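A hedged sketch of the two preprocessing steps, assuming keypoints are held as (N, 2) arrays; the uniform default weights and the simple linear coordinate mapping are illustrative choices:

```python
import numpy as np

def smooth_keypoints(keypoint_history, weights=None):
    """Weighted average of keypoints over several recent frames.
    keypoint_history: list of (N, 2) arrays, one per frame."""
    stack = np.stack(keypoint_history)            # (frames, N, 2)
    if weights is None:
        weights = np.ones(len(keypoint_history))  # plain averaging by default
    weights = np.asarray(weights, dtype=np.float64)
    weights /= weights.sum()
    return np.tensordot(weights, stack, axes=1)   # (N, 2) smoothed keypoints

def normalize_coords(points, cam_size, screen_size):
    """Map camera-space coordinates into display-screen coordinates."""
    sx = screen_size[0] / cam_size[0]
    sy = screen_size[1] / cam_size[1]
    return [(x * sx, y * sy) for x, y in points]
```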
Step S105, recognizing a first action of the hand;
in one embodiment, the first action of identifying the human hand comprises: and identifying a motion track of the human hand according to the key points of the human hand, wherein the motion track comprises a starting point position of the human hand and an ending point position of the human hand. Wherein, according to the key point of the human hand, identifying the motion trail of the human hand comprises: tracking preset key points of the human hand; when the first position of the preset key point does not change within a preset time, identifying the first position as the starting position of the human hand; recognizing the movement of the preset key point; and when the second position of the preset key point does not change within a preset time, identifying the second position as the end point position of the human hand.
The human hand action may comprise a gesture and/or a motion trajectory of the hand.
The gesture may be recognized by feeding the hand image information obtained in step S104 into a deep learning model; for example, the hand key point information is input into the deep learning model to recognize the hand's gesture, which is not described in detail here.
In this step, the action of the human hand can be recognized: the movement of the hand is recorded as a motion trajectory, and the trajectory is analyzed to recognize the action. Specifically, recording the motion trajectory first requires tracking the hand's movement. In a vision-based hand action recognition system, tracking the hand's motion trajectory means tracking the change in position of the gesture across a sequence of frames, so as to obtain the hand's position over continuous time; the quality of this tracking directly affects the quality of the action recognition. Commonly used motion tracking methods include the particle filter algorithm, the Mean-shift algorithm, Kalman filtering, and skeletal tracking.
Target tracking based on particle filtering is a random search process that obtains a posterior probability estimate of the target distribution under a random motion model. It consists mainly of two steps: preliminary sampling and resampling. Preliminary sampling places particles randomly in the image and then computes the similarity between each particle and the features of the tracked target, yielding a weight for each particle. The resampling stage redistributes the particles according to the weights from the preliminary sampling. Preliminary sampling and resampling are repeated until the target is tracked.
The Mean-shift method is a non-parametric probability density gradient estimation algorithm. For hand action recognition, the basic idea of tracking the hand with the Mean-shift algorithm is: first build a hand model, i.e., compute the probability in feature space of the feature values of pixels belonging to the hand in the initial frame; then build a model of the current frame and compute the probabilities of the feature values of all pixels in the region where the hand may be; finally, obtain the mean-shift amount of the hand from the similarity between the initial hand model and the current frame's hand model. By the convergence of the Mean-shift algorithm, the mean-shift amount is computed iteratively until it converges to the position of the hand in the current frame.
Kalman filtering uses a series of mathematical equations to predict the present or future state of a linear system. In tracking the hand's motion trajectory, Kalman filtering mainly observes the position of the hand across a series of image frames and then predicts its position in the next frame. Because Kalman filtering is built on posterior probability estimates for each time interval, it achieves good tracking in Gaussian-distributed environments; it can remove noise and still track the hand well under gesture deformation.
With the widespread adoption of Microsoft Kinect, many researchers use the skeletal point tracking specific to Kinect sensors for hand tracking research. Within the sensor's field of view, Kinect provides complete skeletal tracking for one or two users, i.e., tracking of 20 joint points over the whole body. Skeletal point tracking is divided into active and passive modes: in active mode, up to two users in the field of view are selected for full tracking; in passive mode, skeletal points of up to six users can be tracked, with the additional four users tracked by position only. The principle of Kinect skeletal tracking is to classify 32 parts of the human body and apply machine learning on the acquired depth image to find the skeletal joint information of each part.
Since the skeletal key points of the human hand are collected in the preceding steps, a skeletal-tracking-based method for tracking the hand's motion trajectory can be used preferentially in the present disclosure. The moving distance of a hand key point between two consecutive frames can be computed; when the distance is smaller than a preset threshold, the key point's position is considered unchanged, and when the key point holds its position for a preset number of consecutive frames, that position is recognized as the start point or end point of the hand. Typically the threshold may be set to 1 cm: when a key point's position does not change for 6 consecutive frames, that position is taken as the start point or end point of the hand. The positions of the key points in the frames between the start point and the end point are then computed; the track formed by these key points is the motion trajectory of the hand. The motion trajectory between the start point and the end point is compared with preset motion trajectories, and when the similarity exceeds a preset similarity threshold, the trajectory is recognized as the corresponding hand action.
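The stationary-point rule above can be sketched as follows; the pixels-per-centimeter factor is an assumption, since the actual conversion depends on the camera:

```python
import numpy as np

PX_PER_CM = 38          # assumed camera scale; converts the 1 cm threshold
MOVE_THRESHOLD = 1.0 * PX_PER_CM
STILL_FRAMES = 6        # consecutive frames with no position change

def find_anchor(positions):
    """Index of the first frame starting a stationary run, or None.
    positions: per-frame (x, y) coordinates of the tracked keypoint."""
    for i in range(len(positions) - STILL_FRAMES + 1):
        window = np.asarray(positions[i:i + STILL_FRAMES])
        steps = np.linalg.norm(np.diff(window, axis=0), axis=1)
        if np.all(steps < MOVE_THRESHOLD):
            return i
    return None

def extract_trajectory(positions):
    """Candidate motion trajectory between a start anchor and an end anchor."""
    start = find_anchor(positions)
    if start is None:
        return None
    end = find_anchor(positions[start + STILL_FRAMES:])
    if end is None:
        return None
    end += start + STILL_FRAMES
    return positions[start:end + STILL_FRAMES]
```

The extracted trajectory would then be matched against the preset trajectories with whatever similarity measure the implementation chooses (e.g., dynamic time warping); the patent leaves this open.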
Step S106, controlling the display of at least one virtual object of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first action of the human hand.
In one embodiment, the controlling the display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first action of the human hand comprises: when the distance between the first position of the human hand and the position of a first virtual object of the plurality of virtual objects is less than a first threshold, moving the first virtual object from its current position to the end point position of the human hand.
In one embodiment, the controlling the display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first action of the human hand comprises: determining a second virtual object to be controlled according to the first information of the human hand and the positions of the plurality of virtual objects; and controlling the display position of the second virtual object according to the first action of the human hand.
In a specific embodiment, when the distance between the first position of the human hand and the position of a virtual object is smaller than a first threshold, the hand and the virtual object are judged to overlap; the motion trajectory of the hand is then recognized to control the display position of the virtual object.
The display position can be associated with the hand's display position. In one embodiment, the hand position is given by the center point of the circumscribed detection box, and the display position of the virtual object may coincide with it directly, the object's center overlapping the box's center point. Alternatively, the display position of the virtual object may stand in a fixed positional relationship to the center point; for example, the virtual object may be placed 1 length unit along the positive Y axis from the center point, where the length unit is self-defined, e.g., 1 length unit equals 1 cm, without limitation here. In short, the display position of the virtual object can be determined through some fixed relationship. For more accurate placement, hand key points can be used: the virtual object is mounted on a certain number of hand key points. In one implementation, 3 points are set on the virtual object corresponding to 3 key points of the hand, and the display position of the virtual object is determined through this correspondence.
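A sketch of the selection-and-follow behavior just described; the threshold value and the keypoint indices used for mounting are illustrative assumptions:

```python
import math

FIRST_THRESHOLD = 40.0   # assumed pixel distance for the overlap test

def pick_overlapped_object(hand_pos, objects):
    """Return the first virtual object judged to overlap the hand."""
    for obj in objects:
        if math.dist(hand_pos, obj["position"]) < FIRST_THRESHOLD:
            return obj
    return None

def follow_hand(obj, hand_keypoints, mount_indices=(5, 9, 13)):
    """Bind the object's display position to several hand keypoints;
    here it is simply placed at the centroid of the mounted keypoints."""
    xs = [hand_keypoints[i][0] for i in mount_indices]
    ys = [hand_keypoints[i][1] for i in mount_indices]
    obj["position"] = (sum(xs) / len(xs), sum(ys) / len(ys))
```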
In one embodiment, the display size of the virtual object may also be obtained from the hand information. For example, the size of the virtual object may be determined by the area of the hand's circumscribed detection box, either through a preset correspondence between box area and object size, or dynamically. Taking dynamic determination as an example: the area of the circumscribed detection box when the hand is first detected may be taken as 1, at which point the virtual object is displayed at its original size. As the hand moves toward or away from the image sensor, the box area changes: if the hand moves backward and the box area becomes 0.5 times the first-detected area, the virtual object is scaled to 0.5 times its original size; if the hand moves forward and the box area becomes 2 times the first-detected area, the virtual object is scaled to 2 times its original size. Scaling of the virtual object can thus be controlled flexibly. The scaling ratio may also be controlled by a function: letting S be the original area of the detection box, S1 the current area, and R the scaling ratio of the virtual object, one may set R = (S1/S)^2, so that the scaling is nonlinear and more effects can be achieved. The scaling control function can of course be set arbitrarily as required; the above is only an example. Nor is the hand information used for the display size limited to the area of the detection box: it may also be the edge length of the detection box, the distance between hand key points, and so on, without limitation here. Through these steps, the apparent distance of the virtual object relative to the screen can be controlled.
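The size control reduces to a small function of the detection-box areas; a sketch, with the quadratic shaping from the example above:

```python
def scale_factor(first_area, current_area, nonlinear=True):
    """Scaling ratio R for the virtual object, given the detection-box area S
    at first detection and the current area S1. With nonlinear=True this
    implements the example R = (S1 / S) ** 2; otherwise R = S1 / S."""
    ratio = current_area / first_area
    return ratio ** 2 if nonlinear else ratio

# Example: the hand moves backward and the box area halves ->
# linear scale 0.5, nonlinear scale 0.25.
```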
The present disclosure provides a method and apparatus for controlling a virtual object, an electronic device, and a computer-readable storage medium. The method for controlling a virtual object comprises: acquiring a video, wherein the video comprises a human hand; initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video; acquiring positions of the plurality of virtual objects; recognizing the human hand in the video to obtain first information of the human hand; recognizing a first action of the human hand; and controlling the display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first action of the human hand. By controlling the display attributes of displayed virtual objects directly through the action of the human hand and the information of the human hand, the embodiments of the present disclosure solve the technical problem that display control of virtual objects in the prior art is inflexible.
For ease of understanding, refer to figs. 2a-2h for a specific example of the virtual object control method disclosed herein. Referring to fig. 2a, the video acquired in step S101 includes images of a human hand. Fig. 2b is a schematic diagram of virtual object initialization, using the trigger-event mechanism of step S102: the action of the hand hitting the desktop is recognized and used as the trigger action that initializes the virtual objects for display in the video. Fig. 2c shows the initialized virtual objects, which in this example are virtual characters. As shown in figs. 2d-2e, the position and action of the hand are then recognized; when the hand's position overlaps one of the virtual characters and does not move over multiple video frames, that character is selected. In this specific example the "special" character is selected, the movement trajectory of the hand is recognized, and the character moves along that trajectory; when the hand's position is judged unchanged over multiple frames after moving to a second position, the character is moved to the second position. As shown in figs. 2f-2h, the above steps are repeated to move a second virtual object: the finger leaves the second position, the hand is re-recognized, the steps are repeated to move the "effect" character, and finally the "special" and "effect" characters are combined into the word "special effect" in the video.
Fig. 3 is a schematic structural diagram of an embodiment of a control apparatus for a virtual object according to an embodiment of the present disclosure. As shown in fig. 3, the apparatus 300 includes: a video acquisition module 301, an initialization module 302, a virtual object position acquisition module 303, a human hand recognition module 304, an action recognition module 305, and a display control module 306.
the video acquisition module 301 is configured to acquire a video, where the video includes a human hand;
the initialization module 302 is configured to initialize a plurality of virtual objects and display the plurality of virtual objects in the video;
a virtual object position obtaining module 303, configured to obtain positions of the plurality of virtual objects;
the human hand recognition module 304 is used for recognizing human hands in the video to obtain first information of the human hands;
a motion recognition module 305 for recognizing a first motion of the human hand;
a display control module 306, configured to control the display of at least one virtual object of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first action of the human hand.
Further, the initialization module 302 includes:
the virtual object acquisition module is used for acquiring a plurality of virtual objects;
an initialization position acquisition module for acquiring initialization positions of the plurality of virtual objects;
the first initialization module is used for displaying the virtual objects in the video according to the initialization positions.
Further, the initialization module 302 includes:
the first identification module is used for identifying human hands in the video;
a profile reading module for reading profiles of the plurality of virtual objects in response to identifying a second action of the human hand;
and the second initialization module is used for initializing the virtual objects according to the configuration files of the virtual objects and displaying the virtual objects in the video.
Further, the second initialization module includes:
the configuration file analysis module is used for analyzing the configuration files of the virtual objects to obtain the obtaining addresses and the initialization positions of the virtual objects;
and the second initialization submodule is used for acquiring the virtual objects according to the acquisition address and displaying the virtual objects in the video according to the initialization position.
Further, the human hand recognition module 304 is further configured to identify the human hand in the video and acquire the key points of the human hand and the first position of the human hand.
Further, the human hand recognition module 304 further includes:
the key point acquisition module is used for identifying the human hand in the video and acquiring key points of the human hand;
and the first position acquisition module is used for acquiring a preset key point in the key points of the human hand as the first position of the human hand.
Further, the action recognition module 305 further includes:
and the motion track recognition module is used for recognizing the motion track of the human hand according to the key points of the human hand, wherein the motion track comprises a starting point position of the human hand and an ending point position of the human hand.
Further, the motion trajectory identification module further includes:
a tracking module, configured to track a preset key point of the human hand;
a start position determination module, configured to identify a first position as the start position of the human hand when the first position of the preset key point does not change within a preset time;
a movement recognition module, configured to recognize movement of the preset key point;
and an end position determination module, configured to identify a second position as the end position of the human hand when the second position of the preset key point does not change within a preset time.
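By way of non-limiting illustration, this dwell-based recognition of the start and end positions could be sketched as a small state machine; the stillness radius and dwell time below are illustrative parameters, not values from the disclosure.

    import math

    STILL_RADIUS = 5.0    # pixels within which the key point counts as unchanged
    DWELL_FRAMES = 15     # frames of stillness standing in for the preset time

    class TrajectoryRecognizer:
        def __init__(self):
            self.anchor = None     # latest resting candidate position
            self.still = 0         # consecutive frames the key point stayed put
            self.start = None      # start position, once recognized
            self.moving = False    # whether the key point has left the start

        def update(self, point):
            # Feed one tracked key point per frame; returns (start, end) once
            # the key point has rested, moved, and rested again.
            if self.anchor is None or math.dist(point, self.anchor) > STILL_RADIUS:
                if self.start is not None:
                    self.moving = True          # the key point left the start
                self.anchor, self.still = point, 0
                return None
            self.still += 1
            if self.still < DWELL_FRAMES:
                return None
            if self.start is None:
                self.start = self.anchor        # first rest: start position
            elif self.moving:
                trajectory = (self.start, self.anchor)
                self.start, self.moving = None, False
                return trajectory               # second rest: end position
            return None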
Further, the display control module 306 includes:
a virtual object determination module, configured to determine a second virtual object to be controlled according to the first information of the human hand and the positions of the plurality of virtual objects;
and a first display control module, configured to control a display position of the second virtual object according to the first motion of the human hand.
Further, the display control module 306 is further configured to:
when the distance between the first position of the human hand and the position of a first virtual object of the plurality of virtual objects is less than a first threshold, move the first virtual object from the position of the first virtual object to the end position of the human hand.
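By way of non-limiting illustration, this threshold rule might be sketched as follows; the threshold value and the VirtualObject container are assumptions of the illustration, and selecting the nearest qualifying object when several fall within the threshold is likewise an assumption added to disambiguate.

    import math
    from dataclasses import dataclass

    FIRST_THRESHOLD = 60.0   # pixels; an illustrative value only

    @dataclass
    class VirtualObject:
        name: str
        position: tuple

    def control_display(objects, hand_first_pos, hand_end_pos):
        # Move the virtual object nearest the hand, provided it lies within
        # the first threshold, to the end position of the hand's motion track.
        near = [o for o in objects
                if math.dist(o.position, hand_first_pos) < FIRST_THRESHOLD]
        if not near:
            return None
        target = min(near, key=lambda o: math.dist(o.position, hand_first_pos))
        target.position = hand_end_pos   # "grab and drop" the virtual object
        return target

    objs = [VirtualObject("cat", (100.0, 90.0)), VirtualObject("dog", (400.0, 300.0))]
    moved = control_display(objs, (110.0, 95.0), (250.0, 150.0))
    print(moved)   # VirtualObject(name='cat', position=(250.0, 150.0))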
The apparatus shown in fig. 3 can perform the method of the embodiment shown in fig. 1; for parts of this embodiment not described in detail, reference may be made to the related description of the embodiment shown in fig. 1. The implementation process and technical effects of this technical solution are described in the embodiment shown in fig. 1 and are not repeated here.
Referring now to FIG. 4, a block diagram of an electronic device 400 suitable for implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 409, or from the storage means 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a video, wherein the video comprises a human hand; initialize a plurality of virtual objects and display the plurality of virtual objects in the video; acquire positions of the plurality of virtual objects; recognize the human hand in the video to obtain first information of the human hand; recognize a first motion of the human hand; and control display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first motion of the human hand.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not in some cases constitute a limitation of the unit itself; for example, the video acquisition module may also be described as "a module for acquiring a video".
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features with similar functions disclosed in the present disclosure.

Claims (13)

1. A virtual object control method, comprising:
acquiring a video, wherein the video comprises a human hand;
initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video;
acquiring positions of the plurality of virtual objects;
identifying the human hand in the video to obtain first information of the human hand;
identifying a first motion of the human hand;
controlling display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first motion of the human hand.
2. The virtual object control method according to claim 1, wherein the initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video comprises:
acquiring a plurality of virtual objects;
acquiring initialization positions of the plurality of virtual objects;
and displaying the plurality of virtual objects in the video according to the initialization positions.
3. The virtual object control method according to claim 1, wherein the initializing a plurality of virtual objects and displaying the plurality of virtual objects in the video comprises:
identifying a human hand in the video;
reading configuration files of the plurality of virtual objects in response to recognizing a second motion of the human hand;
and initializing the plurality of virtual objects according to the configuration files of the plurality of virtual objects, and displaying the plurality of virtual objects in the video.
4. The virtual object control method according to claim 3, wherein initializing the plurality of virtual objects and displaying the plurality of virtual objects in the video according to the configuration files of the plurality of virtual objects comprises:
parsing the configuration files of the plurality of virtual objects, and acquiring acquisition addresses and initialization positions of the plurality of virtual objects;
and acquiring the plurality of virtual objects according to the acquisition addresses, and displaying the plurality of virtual objects in the video according to the initialization positions.
5. The virtual object control method according to claim 1, wherein the identifying the human hand in the video to obtain first information of the human hand comprises:
identifying the human hand in the video, and acquiring key points of the human hand and a first position of the human hand.
6. The virtual object control method according to claim 5, wherein the identifying the human hand in the video and acquiring the key points of the human hand and the first position of the human hand comprises:
identifying the human hand in the video and acquiring the key points of the human hand;
and acquiring a preset key point among the key points of the human hand as the first position of the human hand.
7. The virtual object control method according to claim 5, wherein the identifying a first motion of the human hand comprises:
identifying a motion trajectory of the human hand according to the key points of the human hand, wherein the motion trajectory comprises a start position of the human hand and an end position of the human hand.
8. The virtual object control method according to claim 7, wherein the identifying a motion trajectory of the human hand according to the key points of the human hand comprises:
tracking a key point of the human hand;
when a first position of the key point does not change within a preset time, identifying the first position as the start position of the human hand;
identifying movement of the key point;
and when a second position of the key point does not change within a preset time, identifying the second position as the end position of the human hand.
9. The virtual object control method according to claim 1, wherein the controlling display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first motion of the human hand comprises:
determining a first virtual object to be controlled according to the first information of the human hand and the positions of the plurality of virtual objects;
and controlling a display position of the first virtual object according to the first motion of the human hand.
10. The virtual object control method according to claim 7, wherein the controlling display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first motion of the human hand comprises:
when the distance between the first position of the human hand and the position of a second virtual object of the plurality of virtual objects is less than a first threshold, moving the second virtual object from the position of the second virtual object to the end position of the human hand.
11. A virtual object control apparatus, comprising:
a video acquisition module, configured to acquire a video, wherein the video comprises a human hand;
an initialization module, configured to initialize a plurality of virtual objects and display the plurality of virtual objects in the video;
a virtual object position acquisition module, configured to acquire positions of the plurality of virtual objects;
a human hand recognition module, configured to recognize the human hand in the video to obtain first information of the human hand;
a motion recognition module, configured to recognize a first motion of the human hand;
and a display control module, configured to control display of at least one of the plurality of virtual objects according to the first information of the human hand, the positions of the plurality of virtual objects, and the first motion of the human hand.
12. An electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor, configured to execute the computer readable instructions such that, when executing, the processor implements the virtual object control method according to any one of claims 1 to 10.
13. A computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the virtual object control method according to any one of claims 1 to 10.
CN201811454118.6A 2018-11-30 2018-11-30 Control method and device of virtual object Pending CN111258413A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811454118.6A CN111258413A (en) 2018-11-30 2018-11-30 Control method and device of virtual object


Publications (1)

Publication Number Publication Date
CN111258413A true CN111258413A (en) 2020-06-09

Family

ID=70944527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811454118.6A Pending CN111258413A (en) 2018-11-30 2018-11-30 Control method and device of virtual object

Country Status (1)

Country Link
CN (1) CN111258413A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112199016A (en) * 2020-09-30 2021-01-08 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114399536A (en) * 2022-01-19 2022-04-26 北京百度网讯科技有限公司 Virtual human video generation method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104740869A (en) * 2015-03-26 2015-07-01 北京小小牛创意科技有限公司 True environment integrated and virtuality and reality combined interaction method and system
CN105378593A (en) * 2012-07-13 2016-03-02 索夫特克尼特科软件公司 Method and system for human-to-computer gesture based simultaneous interactions using singular points of interest on a hand
CN106385591A (en) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 Video processing method and video processing device
CN107343211A (en) * 2016-08-19 2017-11-10 北京市商汤科技开发有限公司 Method of video image processing, device and terminal device
CN108537867A (en) * 2018-04-12 2018-09-14 北京微播视界科技有限公司 According to the Video Rendering method and apparatus of user's limb motion



Similar Documents

Publication Publication Date Title
CN110517319B (en) Method for determining camera attitude information and related device
US10043308B2 (en) Image processing method and apparatus for three-dimensional reconstruction
CN108615248B (en) Method, device and equipment for relocating camera attitude tracking process and storage medium
CN110555883B (en) Repositioning method and device for camera attitude tracking process and storage medium
WO2019205853A1 (en) Method, device and apparatus for repositioning in camera orientation tracking process, and storage medium
CN104350509B (en) Quick attitude detector
CN110188719B (en) Target tracking method and device
CN110072046B (en) Image synthesis method and device
CN110287891B (en) Gesture control method and device based on human body key points and electronic equipment
US20150302587A1 (en) Image processing device, image processing method, program, and information recording medium
EP3968131A1 (en) Object interaction method, apparatus and system, computer-readable medium, and electronic device
CN110069125B (en) Virtual object control method and device
CN111062981A (en) Image processing method, device and storage medium
CN111199169A (en) Image processing method and device
CN112749613B (en) Video data processing method, device, computer equipment and storage medium
CN108781252A (en) A kind of image capturing method and device
CN109981989B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN111160308A (en) Gesture motion recognition method, device, equipment and readable storage medium
CN111258413A (en) Control method and device of virtual object
CN110069126B (en) Virtual object control method and device
CN114360047A (en) Hand-lifting gesture recognition method and device, electronic equipment and storage medium
WO2020037924A1 (en) Animation generation method and apparatus
CN110941327A (en) Virtual object display method and device
CN110222576B (en) Boxing action recognition method and device and electronic equipment
CN110197459B (en) Image stylization generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination