CN111639612A - Posture correction method and device, electronic equipment and storage medium - Google Patents

Posture correction method and device, electronic equipment and storage medium

Info

Publication number
CN111639612A
CN111639612A (application CN202010501035.9A)
Authority
CN
China
Prior art keywords
bone
posture
target
user
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010501035.9A
Other languages
Chinese (zh)
Inventor
潘思霁
揭志伟
李炳泽
刘小兵
Current Assignee
Zhejiang Sensetime Technology Development Co Ltd (also listed as Zhejiang Shangtang Technology Development Co Ltd)
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202010501035.9A
Publication of CN111639612A
Legal status: Pending (current)


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 — Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/22 — Matching criteria, e.g. proximity measures
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 — Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 — Target detection

Abstract

The present disclosure provides a posture correction method, an apparatus, an electronic device and a storage medium. The method comprises: acquiring a user imitation image of a target user, located in a target detection area, imitating the posture of a target object; determining imitation posture information of the target user based on the user imitation image; determining bone posture matching information between the target user and the target object based on the imitation posture information and the posture information of the target object; and generating and displaying a bone posture image of the target user according to the bone posture matching information, wherein bones with unmatched postures and bones with matched postures are distinguished in the bone posture image of the target user.

Description

Posture correction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for posture correction, an electronic device, and a storage medium.
Background
A sculpture is an ornamental and commemorative work that carries particular meaning, symbolism and connotation. Sculptures are vivid and intuitive and have strong visual impact; figure sculptures, which depict the human form, reflect social culture and atmosphere and are widely exhibited in venues such as science museums and exhibition halls.
Generally, the motions, expressions and so on of a figure sculpture carry specific meanings, so to understand a figure sculpture deeply, a user can imitate its motions. However, the user's imitation usually differs from the motion of the figure sculpture itself, and when the difference is large the user may fail to grasp the essence of the figure sculpture carefully and accurately.
Disclosure of Invention
In view of the above, the present disclosure provides at least a method, an apparatus, an electronic device and a storage medium for posture correction.
In a first aspect, the present disclosure provides a method of posture correction, comprising:
acquiring a user imitation image of a target user, located in a target detection area, imitating the posture of a target object;
determining imitation posture information of the target user based on the user imitation image;
determining bone posture matching information between the target user and the target object based on the imitation posture information and the posture information of the target object;
and generating and displaying a bone posture image of the target user according to the bone posture matching information, wherein bones with unmatched postures and bones with matched postures are distinguished in the bone posture image of the target user.
By adopting the method, the imitation posture information of the target user is determined from the acquired user imitation image, and bone posture matching information between the target user and the target object is determined based on the imitation posture information and the posture information of the target object; the bone posture matching information comprises posture matching information corresponding to each bone displayed in the bone posture image. A bone posture image of the target user is then generated and displayed according to the bone posture matching information, in which bones with unmatched postures and bones with matched postures are distinguished. The user can thus see which bones do not match, for example a leg bone posture or a hand bone posture, and correct them, so that the user comes to know the target object (such as a figure sculpture) more carefully and accurately. This improves the display effect of the target object and realizes interaction between the user and the displayed target object.
In one possible embodiment, generating and presenting a bone posture image of a target user according to the bone posture matching information includes:
and respectively displaying bones with unmatched bone postures and bones with matched bone postures in different colors in the generated and displayed bone posture image of the target user.
In one possible embodiment, acquiring a user imitation image of a target user, located in a target detection area, imitating the posture of a target object includes:
continuously acquiring user imitation images of the target user imitating the posture of the target object;
and generating and displaying a bone posture image of the target user according to the bone posture matching information includes:
controlling a display device corresponding to the target detection area to continuously display the generated bone posture images of the target user.
In the above embodiment, user imitation images are continuously acquired, and the bone posture image of the target user is continuously updated and displayed on the display device corresponding to the target detection area. Because the bone posture image is updated in real time, the target user can adjust the bones whose postures do not match accordingly, so that the imitated posture becomes consistent with the posture of the target object, improving the display effect of the user's imitation.
In one possible embodiment, determining the simulated pose information of the target user based on the user simulated image includes:
determining key point information corresponding to each bone of the target user based on the user simulated image;
and determining the simulated posture information of the target user based on the key point information corresponding to each skeleton of the target user.
In a possible embodiment, the method further comprises:
establishing a virtual three-dimensional model corresponding to the target object, wherein the posture of the virtual three-dimensional model is the same as that of the target object, calculating various posture characteristic information corresponding to each bone in the virtual three-dimensional model, and storing the various posture characteristic information corresponding to each bone in the virtual three-dimensional model as the posture information of the target object;
determining skeletal pose matching information between the target user and the target object based on the mimicking pose information and the pose information of the target object, comprising:
for each bone shown in the bone posture image, determining posture matching information corresponding to the bone based on feature information under various posture features corresponding to the bone in the simulated posture information and various posture feature information of the bone in the virtual three-dimensional model stored in advance;
and determining the posture matching information corresponding to each bone as the bone posture matching information between the target user and the target object.
In a possible embodiment, for each bone shown in the bone pose image, determining pose matching information corresponding to the bone based on feature information under a plurality of pose features corresponding to the bone in the simulated pose information and a plurality of pose feature information of the bone in the virtual three-dimensional model stored in advance includes:
determining the posture matching information corresponding to the skeleton based on the weights respectively corresponding to the multiple posture characteristics of the skeleton, and the characteristic information under the multiple posture characteristics corresponding to the skeleton in the simulated posture information and the multiple posture characteristic information corresponding to the skeleton in the virtual three-dimensional model.
In the above embodiment, a corresponding weight may be set for each of the plurality of posture features: more important features receive larger weights and less important features receive smaller weights. Based on these weights, together with the feature information under the plurality of posture features corresponding to the bone in the imitation posture information and the plurality of posture feature information corresponding to the bone in the virtual three-dimensional model, the posture matching information corresponding to the bone can be determined more accurately.
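A minimal sketch of such a weighted per-bone comparison follows; the feature names, weights, tolerances, and the linear similarity measure are all assumptions for illustration, since the patent does not specify them:

```python
# Weighted matching for one bone: each posture feature (e.g. a joint angle,
# a height from the ground) has a weight; the per-feature similarity between
# the user's imitation posture and the stored model posture is combined into
# a single matching degree. All concrete values here are illustrative.
def bone_match_degree(user_feats, model_feats, weights, tolerance):
    """Return a weighted matching degree in [0, 1] for one bone.

    user_feats / model_feats: dict feature name -> value
    weights: dict feature name -> weight (assumed to sum to 1)
    tolerance: dict feature name -> deviation at which similarity reaches 0
    """
    score = 0.0
    for name, weight in weights.items():
        diff = abs(user_feats[name] - model_feats[name])
        # Linear fall-off: identical values give 1.0, at/beyond tolerance 0.0.
        similarity = max(0.0, 1.0 - diff / tolerance[name])
        score += weight * similarity
    return score

degree = bone_match_degree(
    user_feats={"angle_deg": 85.0, "height_m": 1.2},
    model_feats={"angle_deg": 90.0, "height_m": 1.2},
    weights={"angle_deg": 0.7, "height_m": 0.3},  # angle weighted as more important
    tolerance={"angle_deg": 20.0, "height_m": 0.5},
)
# degree == 0.7 * 0.75 + 0.3 * 1.0 == 0.825
```

The resulting matching degree per bone is the kind of value that can then be compared against a threshold to mark the bone as matched or unmatched.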
In one possible embodiment, before acquiring the user-simulated image of the target user-simulated target object pose located in the target detection area, the method includes:
responding to a preset trigger operation of the target user, and controlling the display device corresponding to the target detection area to display at least one stored image of the target object; and/or,
in the case where a plurality of target objects are displayed, responding to an object selection operation triggered by the target user, and controlling the display device corresponding to the target detection area to display the image of the target object selected by the target user, so that the target user can imitate the target object displayed by the display device.
In one possible embodiment, acquiring a user imitation image of a target user, located in a target detection area, imitating the posture of a target object includes:
acquiring the user imitation image of the target user imitating the posture of the target object in the target detection area after receiving an imitation request triggered by the target user, or after detecting that the target user is present in a preset area.
The following descriptions of the effects of the apparatus, the electronic device, and the like refer to the description of the above method, and are not repeated here.
In a second aspect, the present disclosure provides a posture correction device, comprising:
the acquisition module is used for acquiring a user imitation image of a target user, located in a target detection area, imitating the posture of a target object;
a first determination module to determine mimicking pose information of the target user based on the user mimicking image;
a second determination module to determine skeletal pose matching information between the target user and the target object based on the mimicking pose information and pose information of the target object;
and the generating module is used for generating and displaying a bone posture image of the target user according to the bone posture matching information, wherein bones with unmatched bone postures and bones with matched bone postures in the bone posture image of the target user are distinguished.
In one possible embodiment, the generating module, when generating and presenting the bone posture image of the target user according to the bone posture matching information, is configured to:
and in the generated and displayed bone posture image of the target user, different colors are adopted to respectively display bones with unmatched bone postures and bones with matched bone postures.
In one possible embodiment, the acquiring module, when acquiring a user-simulated image of a target user-simulated target object pose located within a target detection region, is configured to:
continuously acquiring user imitation images of the target user imitating the posture of the target object;
the generating module is used for generating and displaying the bone posture image of the target user according to the bone posture matching information, and is used for:
and controlling a display device corresponding to the target detection area, and continuously displaying the generated bone posture image of the target user.
In one possible embodiment, the first determination module, when determining the mimicked pose information of the target user and the bone pose image of the target user based on the user mimicked image, is configured to:
determining key point information corresponding to each bone of the target user based on the user simulated image;
and determining the imitation posture information of the target user and the bone posture image of the target user based on the key point information corresponding to each bone of the target user.
In a possible embodiment, the apparatus further comprises:
the calculation module is used for establishing a virtual three-dimensional model corresponding to the target object, wherein the posture of the virtual three-dimensional model is the same as that of the target object, calculating various posture characteristic information corresponding to each bone in the virtual three-dimensional model, and storing the various posture characteristic information corresponding to each bone in the virtual three-dimensional model as the posture information of the target object;
a second determination module, when determining skeletal pose matching information between the target user and the target object based on the mimicking pose information and the pose information of the target object, to:
for each bone shown in the bone posture image, determining posture matching information corresponding to the bone based on feature information under various posture features corresponding to the bone in the simulated posture information and various posture feature information of the bone in the virtual three-dimensional model stored in advance;
and determining the posture matching information corresponding to each bone as the bone posture matching information between the target user and the target object.
In one possible embodiment, the second determining module, when determining, for each bone shown in the bone pose image, pose matching information corresponding to the bone based on feature information under a plurality of pose features corresponding to the bone in the simulated pose information and a plurality of pose feature information of the bone in the virtual three-dimensional model stored in advance, is configured to:
determining the posture matching information corresponding to the skeleton based on the weights respectively corresponding to the multiple posture characteristics of the skeleton, and the characteristic information under the multiple posture characteristics corresponding to the skeleton in the simulated posture information and the multiple posture characteristic information corresponding to the skeleton in the virtual three-dimensional model.
In one possible embodiment, for the stage before the user imitation image of the target user imitating the posture of the target object is acquired, the apparatus further comprises:
the first display module is used for responding to a preset trigger operation of the target user and controlling the display device corresponding to the target detection area to display at least one stored image of the target object; and/or,
the second display module is used for, when a plurality of target objects are displayed, responding to an object selection operation triggered by the target user and controlling the display device corresponding to the target detection area to display the image of the target object selected by the target user, so that the target user can imitate the target object displayed by the display device.
In one possible embodiment, the acquiring module, when acquiring a user-simulated image of a target user-simulated target object pose located within a target detection region, is configured to:
and acquiring a user simulation image of a target user simulating the posture of the target object in the target detection area after receiving a simulation request triggered by the target user or detecting that the target user exists in the preset area.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of posture correction as described in the first aspect or any of the embodiments above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of posture correction as described in the first aspect or any one of the embodiments above.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings here are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art can derive other related drawings from them without inventive effort.
Fig. 1 illustrates a flow chart of a method for posture correction provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a bone pose image in a pose correction method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic interface diagram of a display device in a posture correction method provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an architecture of a posture correction apparatus provided by an embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of an electronic device 500 provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Generally, various artworks such as sculptures and wax figures can be displayed in an exhibition hall. The motions, expressions and so on of a figure sculpture carry specific meanings, so to understand a figure sculpture deeply, a user can imitate its motions. However, the user's imitation usually differs from the motion of the figure sculpture itself, and when the difference is large the user may fail to grasp the essence of the figure sculpture carefully and accurately.
In order to solve the above problem, an embodiment of the present disclosure provides a posture correction method in which a bone posture image is generated and displayed, and bones with unmatched postures and bones with matched postures are distinguished in the generated image. The user can therefore check the bones whose postures do not match, for example an unmatched leg bone posture or hand bone posture, and correct them. In this way the user comes to know the target object (such as a figure sculpture) more carefully and accurately, the display effect of the target object is improved, and interaction between the user and the displayed target object is achieved.
For the purpose of facilitating an understanding of the disclosed embodiments, a method for posture correction disclosed in the disclosed embodiments will be described in detail first.
The posture correction method provided by the embodiment of the disclosure can be applied to a server or to a terminal device supporting a display function. The server may be a local server or a cloud server, and the terminal device may be a smart phone, a tablet computer, a Personal Digital Assistant (PDA), a smart television and the like, which is not limited in this disclosure.
Referring to fig. 1, a schematic flow chart of a method for posture correction provided by an embodiment of the present disclosure is shown, the method includes S101-S104, where:
s101, obtaining a user imitation image of a target user imitation target object posture in a target detection area.
And S102, determining the imitation posture information of the target user based on the user imitation image.
S103, determining bone posture matching information between the target user and the target object based on the simulated posture information and the posture information of the target object.
And S104, generating and displaying a bone posture image of the target user according to the bone posture matching information, wherein bones with unmatched bone postures and bones with matched bone postures in the bone posture image of the target user are distinguished.
In the method, the imitation posture information of the target user is determined from the acquired user imitation image, and bone posture matching information between the target user and the target object is determined based on the imitation posture information and the posture information of the target object. A bone posture image of the target user is then generated and displayed according to the bone posture matching information, in which bones with unmatched postures and bones with matched postures are distinguished. The user can thus see which bones do not match, for example a leg bone posture or a hand bone posture, and correct them, so that the user comes to know the target object (such as a figure sculpture) more carefully and accurately; this improves the display effect of the target object and realizes interaction between the user and the displayed target object.
For S101:
here, the target object may be a character sculpture displayed in an exhibition hall, a character in a drawing, or the like. During specific implementation, a target user can simulate the posture of the target object according to the displayed target object, and meanwhile, a user simulated image of the target user simulating the posture of the target object can be obtained through the camera equipment arranged in the target detection area. The displayed target object can be a solid figure sculpture and a figure in a solid picture displayed in the exhibition hall; the displayed target object can also be a person in an image displayed on display equipment of the exhibition hall, and the image displayed on the display equipment can be an image of a human sculpture, an image corresponding to a picture and the like; the target object presented may also be a virtual character or the like presented by an augmented reality device.
In an alternative embodiment, acquiring a user imitation image of a target user, located in the target detection area, imitating the posture of the target object may include: continuously acquiring user imitation images of the target user imitating the posture of the target object. Displaying the bone posture image of the target user according to the bone posture matching information then includes: controlling the display device corresponding to the target detection area to continuously display the bone posture images of the target user.
The image pickup device can continuously acquire, in real time, user imitation images of the target user imitating the posture of the target object. For each frame of the user imitation image, the bone posture matching information corresponding to that frame is determined, and the bone posture image of the target user corresponding to that frame is determined from it. The display device corresponding to the target detection area is then controlled to continuously display the bone posture images of the target user; that is, the display device displays, in real time, the bone posture image corresponding to each frame of the user imitation image.
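The per-frame flow described above can be sketched as a simple loop; all of the function names here are placeholders standing in for the patent's S102–S104 steps, not a real API:

```python
# For every captured frame: estimate the user's imitation posture (S102),
# match it against the stored target posture (S103), and render the
# color-coded bone posture image for display (S104). The callables are
# placeholders supplied by the surrounding system.
def run_pipeline(frames, target_pose, estimate_pose, match, render):
    images = []
    for frame in frames:
        imitation_pose = estimate_pose(frame)            # S102
        match_info = match(imitation_pose, target_pose)  # S103
        images.append(render(match_info))                # S104
    return images
```

In a real deployment the loop would consume a live camera stream and push each rendered image to the display device instead of collecting a list.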
In the above embodiment, user imitation images are continuously acquired, and the bone posture images of the target user are continuously displayed on the display device corresponding to the target detection area. Because the bone posture image is updated in real time, the target user can adjust the bones whose postures do not match accordingly, so that the imitated posture becomes consistent with the posture of the target object, improving the display effect of the user's imitation.
For S102:
here, the mimic pose information of the target user and the bone pose image of the target user may be determined from the user mimic image.
In an alternative embodiment, determining the simulated pose information of the target user and the bone pose image of the target user based on the user simulated image may include:
firstly, determining key point information corresponding to each bone of a target user based on a user simulation image.
And secondly, determining simulated posture information of the target user based on the key point information corresponding to each skeleton of the target user.
Explaining step one: key point extraction can be performed on the user imitation image using a key point extraction neural network, determining the key point information corresponding to each bone of the target user. The bones may include those corresponding to the hands, head, arms, legs, feet and so on; the number of key points for each bone and their positions can be set according to actual needs. For example, the head may have a single key point located in the middle of the forehead.
Explaining step two: after the key point information corresponding to each bone of the target user is determined, the imitation posture information of the target user can be determined from it. For example, feature information such as the angle between each bone and its adjacent bones, and the height of the bone from the ground, may be computed from the key point information of each bone; this feature information is taken as the feature information under the plurality of posture features corresponding to the bone.
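As a concrete sketch of these feature computations (the coordinate convention and the specific key points are assumptions), the angle formed at a shared joint and a key point's height above the ground could be derived like this:

```python
import math

# Two posture features derived from bone key points: the angle formed at a
# joint by two adjacent bones, and a key point's height above a ground line.
# Points are (x, y) image coordinates with y increasing downward (assumed).
def angle_between_bones(joint, end_a, end_b):
    """Angle in degrees at `joint` between bones joint->end_a and joint->end_b."""
    ax, ay = end_a[0] - joint[0], end_a[1] - joint[1]
    bx, by = end_b[0] - joint[0], end_b[1] - joint[1]
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    return math.degrees(math.acos(dot / norm))

def height_above_ground(keypoint, ground_y):
    """Vertical distance of a key point above the ground line."""
    return ground_y - keypoint[1]

elbow_angle = angle_between_bones((0, 0), (1, 0), (0, 1))  # 90.0 degrees
```

The same pair of functions can be applied to every bone to fill out the feature information described above.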
During specific implementation, the bone posture image of the target user can be obtained according to the key point information corresponding to each bone of the target user, namely, the key points can be connected according to the limb structure of the human body, so that the bone posture image of the target user is obtained. Referring to fig. 2, a schematic diagram of a skeleton posture image including key points of head skeleton, arm skeleton, body skeleton and leg skeleton is shown in the posture correction method. For example, the keypoint location of the head bone may be at a location in the middle of the vertex; the keypoints of the arm bones may include keypoints of the left arm bones and keypoints of the right arm bones, the keypoints of the left arm bones and the right arm bones may be symmetric, and the keypoint locations of the left arm/right arm bones may be located at: the joint position between the front arm and the rear arm, the joint position between the front arm and the wrist, and the joint position between the rear arm and the shoulder; multiple keypoint locations of the body skeleton may be located: the joint position between the neck and the body and the middle position of the abdomen; the keypoints of the leg bones may include a keypoint of the left leg bone and a keypoint of the right leg bone, the keypoint of the left leg bone and the keypoint of the right leg bone may be symmetric, and the keypoint locations of the left/right leg bones may be located at: the joint position between the leg and the crotch bone (the key point positions corresponding to the left leg and the right leg are the same), the joint position between the thigh and the shank and the joint position between the shank and the ankle.
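The step of connecting the key points according to the limb structure can be sketched with a fixed edge list pairing key-point names; the names and coordinates below are illustrative, not the patent's exact key-point set:

```python
# Connecting key points according to the limb structure of the human body:
# each pair of key-point names is one bone, and each detected pair becomes a
# line segment of the bone posture image. Names here are illustrative.
SKELETON_EDGES = [
    ("head", "neck"), ("neck", "left_shoulder"), ("neck", "right_shoulder"),
    ("left_shoulder", "left_elbow"), ("left_elbow", "left_wrist"),
    ("right_shoulder", "right_elbow"), ("right_elbow", "right_wrist"),
    ("neck", "abdomen"), ("abdomen", "left_hip"), ("abdomen", "right_hip"),
    ("left_hip", "left_knee"), ("left_knee", "left_ankle"),
    ("right_hip", "right_knee"), ("right_knee", "right_ankle"),
]

def skeleton_segments(keypoints):
    """keypoints: dict name -> (x, y). Returns drawable line segments,
    skipping any edge whose key points were not detected."""
    return [
        (keypoints[a], keypoints[b])
        for a, b in SKELETON_EDGES
        if a in keypoints and b in keypoints
    ]

segments = skeleton_segments({"head": (50, 10), "neck": (50, 30)})
# segments -> [((50, 10), (50, 30))]
```

Rendering each segment (optionally in its per-bone match color) yields the bone posture image of the target user.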
For S103:
Here, the bone posture matching information includes posture matching information corresponding to each bone, and the posture matching information may include a matching degree; for example, the posture matching information of bone A may be a matching degree of 80%. That is, the bone posture matching information may include a matching degree corresponding to each bone.
Before acquiring the user imitation image of a target user imitating the posture of a target object within the target detection area, the method further includes:
establishing a virtual three-dimensional model corresponding to the target object, wherein the posture of the virtual three-dimensional model is the same as that of the target object, calculating various posture characteristic information corresponding to each bone in the virtual three-dimensional model, and storing the various posture characteristic information corresponding to each bone in the virtual three-dimensional model as the posture information of the target object.
Here, a virtual three-dimensional model corresponding to the target object may be established, for example, a virtual three-dimensional model of a displayed sculpture may be established, wherein the posture of the virtual three-dimensional model is the same as that of the target object. Meanwhile, the bone posture image of the target object can be determined based on the established virtual three-dimensional model, and the bone posture image of the target object can be displayed on the display device. And calculating various posture characteristic information corresponding to each bone in the virtual three-dimensional model, storing the various posture characteristic information corresponding to each bone in the virtual three-dimensional model as the posture information of the target object, and providing data support for subsequently determining the bone posture matching information between the target user and the target object.
Illustratively, the plurality of posture features may include at least one of the following: angle features, spacing features, height features, and the like. The angle features may include angles between different bones, for example the angle between the head bone and a shoulder bone, the angle between the forearm bone and the upper-arm bone, the angle between the upper-arm bone and the torso bone, the angle between the thigh bone and the calf bone, or the angle between a foot bone and the ground. The spacing features may include distances between different bones, for example the distance between a hand bone and the torso bone, between the head bone and a shoulder bone, or between a hand bone and a leg bone. The height features may include the height of each bone, for example the height of the head bone from the ground, the height of the left or right hand bone from the ground, or the height of the left or right shoulder bone from the ground. The various kinds of posture feature information can be set according to actual needs; the above is only an exemplary illustration.
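As a sketch of how feature information under several posture features might be computed for a single bone, assuming hypothetical key-point names (`left_wrist`, `abdomen`) and a known ground line `ground_y` (image y axis pointing downward):

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D key points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def left_hand_features(kp, ground_y):
    """Illustrative spacing/height feature information for the left-hand bone.

    The key-point names and the chosen features are assumptions for this sketch.
    """
    return {
        "spacing_hand_torso": dist(kp["left_wrist"], kp["abdomen"]),
        "height_from_ground": ground_y - kp["left_wrist"][1],
    }

kp = {"left_wrist": (30.0, 60.0), "abdomen": (0.0, 20.0)}
print(left_hand_features(kp, 100.0))
# {'spacing_hand_torso': 50.0, 'height_from_ground': 40.0}
```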
In specific implementation, various posture characteristic information corresponding to each bone can be determined through the key point information corresponding to each bone.
In an alternative embodiment, determining skeletal pose matching information between the target user and the target object based on the simulated pose information and the pose information of the target object may include:
and aiming at each bone shown in the bone posture image, determining posture matching information corresponding to the bone based on the feature information under the various posture features corresponding to the bone in the simulated posture information and the prestored various posture feature information of the bone in the virtual three-dimensional model.
And secondly, determining the posture matching information corresponding to each bone as the bone posture matching information between the target user and the target object.
Here, feature information under a plurality of posture features corresponding to each bone may be determined based on the key point information of each bone, and feature information under a plurality of posture features corresponding to each bone in the bone posture image may be obtained. And further, for each bone shown in the bone posture image, the posture matching information corresponding to the bone can be determined based on the feature information under the multiple posture features corresponding to the bone in the simulated posture information and the multiple posture feature information of the bone in the pre-stored virtual three-dimensional model.
For example, the cosine similarity between the feature information under the various posture features corresponding to the bone in the simulated posture information and the pre-stored various posture feature information of the bone in the virtual three-dimensional model can be calculated, and the matching degree of the bone can be determined based on the calculated cosine similarity, thereby determining the posture matching information corresponding to the bone. Alternatively, the feature information under the various posture features corresponding to the bone in the simulated posture information and the pre-stored various posture feature information of the bone in the virtual three-dimensional model can be input into a neural network for determining the matching degree, so as to obtain the posture matching information corresponding to the bone. Further, the obtained posture matching information corresponding to each bone is determined as the bone posture matching information between the target user and the target object, for example, the matching degree of each bone shown in the bone posture image.
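The cosine-similarity variant can be sketched as follows; treating each bone's feature information as a flat numeric vector, and mapping similarity onto a percentage matching degree, are assumptions for illustration:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bone_matching_degree(user_features, model_features):
    """Matching degree of one bone as a percentage.

    Negative similarities are clipped to 0; the clipping rule is an assumption.
    """
    return max(0.0, cosine_similarity(user_features, model_features)) * 100.0

# Identical feature vectors -> full match.
print(bone_matching_degree([90.0, 0.5, 1.6], [90.0, 0.5, 1.6]))  # ≈ 100.0
```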
In an optional embodiment, for each bone shown in the bone posture image, determining the posture matching information corresponding to the bone based on the feature information under the multiple posture features corresponding to the bone in the simulated posture information and the multiple posture feature information of the bone in the pre-stored virtual three-dimensional model may include: determining the posture matching information corresponding to the bone based on the weights respectively corresponding to the multiple posture features of the bone, the feature information under the multiple posture features corresponding to the bone in the simulated posture information, and the multiple posture feature information corresponding to the bone in the virtual three-dimensional model.
In specific implementation, a corresponding weight may be set for each of the plurality of posture features, and if the plurality of posture features include an angle feature, a distance feature, and a height feature, a weight may be set for each of the angle feature, the distance feature, and the height feature according to an actual situation, where a sum of the weights corresponding to the angle feature, the distance feature, and the height feature may be 1.
And determining the posture matching information corresponding to the skeleton based on the weights respectively corresponding to the various posture characteristics of the skeleton, and the characteristic information under the various posture characteristics corresponding to the skeleton in the simulated posture information and the various posture characteristic information corresponding to the skeleton in the virtual three-dimensional model.
For example, for each posture feature of the plurality of posture features of each bone, feature information similarity between feature information corresponding to the posture feature in the simulated posture information and feature information corresponding to the posture feature in the virtual three-dimensional model can be calculated, and feature information similarity corresponding to each posture feature can be obtained; and obtaining the posture matching information of the skeleton of the target user based on the feature information similarity corresponding to each posture feature and the weight corresponding to each posture feature.
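The weighted combination of per-feature similarities might look like the following sketch, assuming the per-feature similarities have already been computed and the weights sum to 1:

```python
def weighted_matching(feature_sims, weights):
    """Combine per-feature similarities into one matching value for a bone.

    `feature_sims` and `weights` map feature name -> value; weights are
    expected to sum to 1, as described in the text.
    """
    if abs(sum(weights.values()) - 1.0) > 1e-6:
        raise ValueError("feature weights should sum to 1")
    return sum(feature_sims[name] * w for name, w in weights.items())

# Hypothetical similarities and weights for the angle/spacing/height features.
sims = {"angle": 0.9, "spacing": 0.8, "height": 1.0}
w = {"angle": 0.5, "spacing": 0.3, "height": 0.2}
print(weighted_matching(sims, w))  # ≈ 0.89
```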
In the above embodiment, a corresponding weight may be set for each of the plurality of posture features: a more important posture feature may be given a larger weight, and a less important one a smaller weight. Based on the weights corresponding to the plurality of posture features of the bone, the feature information under the plurality of posture features corresponding to the bone in the simulated posture information, and the plurality of posture feature information corresponding to the bone in the virtual three-dimensional model, the posture matching information corresponding to the bone can be determined more accurately.
For S104:
after the posture matching information corresponding to each bone is obtained, a bone posture image corresponding to the user imitation image of the target user can be generated according to the posture matching information of each bone. Wherein, in the bone posture image, bones with unmatched bone postures and bones with matched bone postures have different identifications. And meanwhile, the simulation similarity corresponding to the bone posture image can be determined according to the posture matching information corresponding to each bone displayed in the bone posture image. For example, if the posture matching information corresponding to each bone is the matching degree, the matching degree corresponding to each bone can be averaged to obtain the simulation similarity corresponding to the bone posture image; or, the matching degrees corresponding to each bone can be weighted and averaged to obtain the simulation similarity corresponding to the bone posture image.
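The averaging (or weighted averaging) of per-bone matching degrees into a single simulation similarity can be sketched as follows; the bone names and weights are illustrative assumptions:

```python
def simulation_similarity(bone_match, bone_weights=None):
    """Simulation similarity from per-bone matching degrees.

    With no weights, the plain average is used; otherwise a weighted
    average over the given per-bone weights.
    """
    if bone_weights is None:
        return sum(bone_match.values()) / len(bone_match)
    total = sum(bone_weights.values())
    return sum(bone_match[b] * w for b, w in bone_weights.items()) / total

m = {"head": 100.0, "left_arm": 80.0, "right_arm": 60.0}
print(simulation_similarity(m))                                        # 80.0
print(simulation_similarity(m, {"head": 1, "left_arm": 1, "right_arm": 2}))  # 75.0
```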
In an alternative embodiment, generating and presenting a bone pose image of the target user according to the bone pose matching information may include: and in the generated and displayed bone posture image of the target user, different colors are adopted to respectively display bones with unmatched bone postures and bones with matched bone postures.
For example, in the bone pose image, the color of the bone whose bone pose is not matched may be set to red, and the color of the bone whose bone pose is matched may be set to white. Alternatively, when the bone is represented by using lines, the bone with unmatched bone posture and the bone with matched bone posture can be respectively represented by different line types, for example, the bone with matched bone posture can be represented by using a solid line, and the bone with unmatched bone posture can be represented by using a dotted line.
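Choosing a display style per bone, as described above, might be sketched as follows; the threshold separating "matched" from "unmatched" is an assumed parameter (the disclosure does not fix one):

```python
def bone_style(matching_degree, threshold=80.0):
    """Display style for one bone in the bone posture image.

    Matched bones (degree >= threshold) are drawn as white solid lines,
    unmatched bones as red dashed lines, as in the example above.
    """
    if matching_degree >= threshold:
        return {"color": "white", "linestyle": "solid"}
    return {"color": "red", "linestyle": "dashed"}

print(bone_style(90.0))  # {'color': 'white', 'linestyle': 'solid'}
print(bone_style(50.0))  # {'color': 'red', 'linestyle': 'dashed'}
```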
In an alternative embodiment, before acquiring the user imitation image of the target user imitating the posture of the target object within the target detection area, the method further comprises:
responding to a preset trigger operation of the target user, and controlling the display device corresponding to the target detection area to display the stored image of at least one target object; and/or,
and under the condition that a plurality of displayed target objects are provided, responding to an object selection operation triggered by a target user, and controlling the display equipment corresponding to the target detection area to display the image of the target object selected by the target user so that the target user can simulate the target object displayed by the display equipment.
Here, the preset trigger operation may be any operation set in advance, for example clicking a preset button. In response to the preset trigger operation of the target user, the display device corresponding to the target detection area is controlled to display the stored image of at least one target object, so that the target user can view the at least one target object that can be imitated and select, from the displayed image(s), a target object posture to be imitated.
In a specific implementation, when a plurality of target objects are provided, the images of each target object may be sequentially displayed on the display device corresponding to the target detection area according to a set order, or the images of all the target objects may be displayed on the display device corresponding to the target detection area.
In view of the variety and flexibility of the simulation, multiple target objects may be stored in advance here so that different users may simulate different target object poses as desired. In specific implementation, the target user can select a target object to be simulated from the displayed multiple target objects, that is, the target user triggers an object selection operation, and controls the display device corresponding to the target detection area to display an image of the target object selected by the target user in response to the object selection operation triggered by the target user, so that the target user can simulate the posture of the selected target object conveniently according to the displayed image of the target object.
In an alternative embodiment, acquiring the user imitation image of a target user imitating the posture of the target object within the target detection area may include: acquiring the user imitation image after receiving an imitation request triggered by the target user, or after detecting that the target user is present in a preset area.
Here, when the target user wants to imitate the posture of the target object, an imitation request may be triggered: for example, by clicking an imitation button provided in the target detection area; by clicking an imitation button provided on a communication device; or by issuing a preset imitation instruction, for example saying "I want to imitate target object A". The triggered imitation request can take various forms; the above is only an exemplary illustration. After the imitation request triggered by the target user is received, the user imitation image of the target user imitating the posture of the target object within the target detection area is acquired.
Or, a preset area may be set in advance, and after the target user is detected to exist in the preset area, a user imitation image of the target user imitating the posture of the target object in the target detection area is acquired.
Referring to fig. 3, an interface diagram of the display device is shown. Fig. 3 includes a bone posture image, an image of the target object, the simulation similarity, and a simulation mark (i.e. the circle in fig. 3), wherein the ratio of the area of the black region in the simulation mark to the area of the entire circular region matches the simulation similarity; that is, if the simulation similarity is 50%, the area of the black region is half of the area of the entire circular region. The area of the black region in the simulation mark can be adjusted in real time according to the simulation similarity.
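The relationship between the simulation similarity and the black region of the circular simulation mark can be sketched as follows; the similarity is assumed to be given as a fraction in [0, 1]:

```python
import math

def mark_black_area(similarity, radius):
    """Area of the black region in the circular simulation mark.

    The black area is proportional to the simulation similarity, so a
    similarity of 0.5 blackens half of the circle.
    """
    if not 0.0 <= similarity <= 1.0:
        raise ValueError("similarity must be in [0, 1]")
    return similarity * math.pi * radius ** 2

print(mark_black_area(0.5, 1.0))  # ≈ 1.5708 (half of pi)
```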
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same concept, an embodiment of the present disclosure further provides a posture correction device, and as shown in fig. 4, an architecture diagram of the posture correction device provided in the embodiment of the present disclosure includes an obtaining module 401, a first determining module 402, a second determining module 403, a generating module 404, a calculating module 405, a first displaying module 406, and a second displaying module 407, specifically:
an obtaining module 401, configured to obtain a user-imitated image of a target user imitating a posture of a target object, where the target user is located in a target detection region;
a first determining module 402 for determining mimicking pose information of the target user based on the user mimicking image;
a second determination module 403 for determining bone pose matching information between the target user and the target object based on the mimicking pose information and the pose information of the target object;
a generating module 404, configured to generate and display a bone posture image of the target user according to the bone posture matching information, where, in the bone posture image of the target user, bones with unmatched bone postures are distinguished from bones with matched bone postures.
In a possible implementation, the generating module 404, when generating and presenting the bone pose image of the target user according to the bone pose matching information, is configured to:
and respectively displaying bones with unmatched bone postures and bones with matched bone postures in different colors in the generated and displayed bone posture image of the target user.
In one possible embodiment, the acquiring module 401, when acquiring the user imitation image of the target user imitating the posture of the target object within the target detection area, is configured to:
continuously acquiring user imitation images of the target user imitating the posture of the target object;
the generating module 404, when generating and displaying the bone posture image of the target user according to the bone posture matching information, is configured to:
and controlling a display device corresponding to the target detection area, and continuously displaying the generated bone posture image of the target user.
In one possible implementation, the first determining module 402, when determining the mimicking posture information of the target user based on the user mimicking image, is configured to:
determining key point information corresponding to each bone of the target user based on the user simulated image;
and determining the simulated posture information of the target user based on the key point information corresponding to each skeleton of the target user.
In a possible embodiment, the apparatus further comprises:
a calculation module 405, configured to establish a virtual three-dimensional model corresponding to the target object, where a posture of the virtual three-dimensional model is the same as a posture of the target object, calculate multiple kinds of posture characteristic information corresponding to each bone in the virtual three-dimensional model, and store the multiple kinds of posture characteristic information corresponding to each bone in the virtual three-dimensional model as the posture information of the target object;
a second determination module 403, when determining bone pose matching information between the target user and the target object based on the mimicking pose information and the pose information of the target object, for:
for each bone shown in the bone posture image, determining posture matching information corresponding to the bone based on feature information under various posture features corresponding to the bone in the simulated posture information and various posture feature information of the bone in the virtual three-dimensional model stored in advance;
and determining the posture matching information corresponding to each bone as the bone posture matching information between the target user and the target object.
In one possible embodiment, the second determining module 403, when determining, for each bone shown in the bone pose image, pose matching information corresponding to the bone based on feature information under a plurality of pose features corresponding to the bone in the simulated pose information and a plurality of pose feature information of the bone in the virtual three-dimensional model stored in advance, is configured to:
determining the posture matching information corresponding to the skeleton based on the weights respectively corresponding to the multiple posture characteristics of the skeleton, and the characteristic information under the multiple posture characteristics corresponding to the skeleton in the simulated posture information and the multiple posture characteristic information corresponding to the skeleton in the virtual three-dimensional model.
In one possible embodiment, before the user imitation image of the target user imitating the posture of the target object within the target detection area is acquired, the apparatus further includes:
a first display module 406, configured to respond to a preset trigger operation of the target user, and control the display device corresponding to the target detection area to display the stored image of at least one target object; and/or,
the second display module 407 is configured to, in a case that a plurality of displayed target objects are present, respond to an object selection operation triggered by a target user, and control the display device corresponding to the target detection area to display an image of the target object selected by the target user, so that the target user can simulate the target object displayed by the display device.
In one possible implementation, the obtaining module 401, when acquiring the user imitation image of the target user imitating the posture of the target object within the target detection area, is configured to:
and acquiring a user simulation image of a target user simulating the posture of the target object in the target detection area after receiving a simulation request triggered by the target user or detecting that the target user exists in the preset area.
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or the modules included therein, may be used to execute the method described in the above method embodiments; for specific implementation, reference may be made to the description of the above method embodiments, which is not repeated here for brevity.
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 5, a schematic structural diagram of an electronic device provided in the embodiment of the present disclosure includes a processor 501, a memory 502, and a bus 503. The memory 502 is used for storing execution instructions and includes an internal memory 5021 and an external storage 5022. The internal memory 5021 is used for temporarily storing operation data in the processor 501 and data exchanged with the external storage 5022 such as a hard disk; the processor 501 exchanges data with the external storage 5022 through the internal memory 5021. When the electronic device 500 operates, the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions:
acquiring a user imitation image of a target user imitation target object posture in a target detection area;
determining impersonation pose information of the target user based on the user impersonation image;
determining skeletal pose matching information between the target user and the target object based on the mimicking pose information and the pose information of the target object;
and generating and displaying a bone posture image of the target user according to the bone posture matching information, wherein bones with unmatched bone postures and bones with matched bone postures in the bone posture image of the target user are distinguished.
Furthermore, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the posture correction method described in the above method embodiments.
The computer program product of the method for correcting a posture provided in the embodiments of the present disclosure includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to execute steps of the method for correcting a posture described in the above method embodiments, which may be specifically referred to the above method embodiments and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A method of posture correction, comprising:
acquiring a user imitation image of a target user imitation target object posture in a target detection area;
determining impersonation pose information of the target user based on the user impersonation image;
determining skeletal pose matching information between the target user and the target object based on the mimicking pose information and the pose information of the target object;
and generating and displaying a bone posture image of the target user according to the bone posture matching information, wherein bones with unmatched bone postures and bones with matched bone postures in the bone posture image of the target user are distinguished.
2. The method of claim 1, wherein generating and presenting a bone pose image of a target user from the bone pose matching information comprises:
and respectively displaying bones with unmatched bone postures and bones with matched bone postures in different colors in the generated and displayed bone posture image of the target user.
3. The method of claim 1, wherein obtaining a user-mimicking image of a target user-mimicking target object pose located within a target detection region comprises:
continuously acquiring user imitation images of the target user imitating the posture of the target object;
generating and displaying a bone posture image of the target user according to the bone posture matching information, wherein the bone posture image comprises:
and controlling a display device corresponding to the target detection area, and continuously displaying the generated bone posture image of the target user.
4. The method of claim 1, wherein determining the mimicking pose information of the target user based on the user mimicking image comprises:
determining key point information corresponding to each bone of the target user based on the user simulated image;
and determining the simulated posture information of the target user based on the key point information corresponding to each skeleton of the target user.
5. The method of claim 1, further comprising:
establishing a virtual three-dimensional model corresponding to the target object, wherein the posture of the virtual three-dimensional model is the same as that of the target object, calculating various posture characteristic information corresponding to each bone in the virtual three-dimensional model, and storing the various posture characteristic information corresponding to each bone in the virtual three-dimensional model as the posture information of the target object;
determining skeletal pose matching information between the target user and the target object based on the mimicking pose information and the pose information of the target object, comprising:
for each bone shown in the bone posture image, determining posture matching information corresponding to the bone based on feature information under various posture features corresponding to the bone in the simulated posture information and various posture feature information of the bone in the virtual three-dimensional model stored in advance;
and determining the posture matching information corresponding to each bone as the bone posture matching information between the target user and the target object.
6. The method of claim 5, wherein for each bone shown in the bone pose image, determining pose matching information corresponding to the bone based on feature information under a plurality of pose features corresponding to the bone in the simulated pose information and a plurality of pose feature information of the bone in the virtual three-dimensional model stored in advance comprises:
determining the posture matching information corresponding to the skeleton based on the weights respectively corresponding to the multiple posture characteristics of the skeleton, and the characteristic information under the multiple posture characteristics corresponding to the skeleton in the simulated posture information and the multiple posture characteristic information corresponding to the skeleton in the virtual three-dimensional model.
7. The method of any one of claims 1-6, further comprising, before acquiring the user imitation image of the target user imitating the posture of the target object within the target detection area:
in response to a preset trigger operation by the target user, controlling a display device corresponding to the target detection area to display a stored image of at least one target object; and/or,
in a case where a plurality of target objects are displayed, in response to an object selection operation triggered by the target user, controlling the display device corresponding to the target detection area to display the image of the target object selected by the target user, so that the target user imitates the target object displayed by the display device.
8. The method of any one of claims 1-7, wherein acquiring the user imitation image of the target user imitating the posture of the target object within the target detection area comprises:
acquiring the user imitation image of the target user imitating the posture of the target object within the target detection area after receiving an imitation request triggered by the target user, or after detecting that the target user is present in a preset area.
9. An apparatus for posture correction, comprising:
an acquisition module configured to acquire a user imitation image of a target user imitating a posture of a target object within a target detection area;
a first determination module configured to determine imitation posture information of the target user based on the user imitation image;
a second determination module configured to determine bone posture matching information between the target user and the target object based on the imitation posture information and posture information of the target object;
and a generation module configured to generate and display a bone posture image of the target user according to the bone posture matching information, wherein bones whose postures do not match and bones whose postures match are distinguished in the bone posture image of the target user.
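The generation module's distinguishing of matched and unmatched bones could be as simple as a per-bone color lookup before rendering. The green/red convention and the 0.8 threshold below are assumptions, not part of the claims:

```python
# Assumed display convention: green for matched bones, red for unmatched.
MATCHED_COLOR = (0, 255, 0)
UNMATCHED_COLOR = (255, 0, 0)

def color_bones(bone_match_info, threshold=0.8):
    """Map each bone's match score to a display color so that matched and
    unmatched bones are visually distinguished in the bone posture image."""
    return {
        bone: MATCHED_COLOR if score >= threshold else UNMATCHED_COLOR
        for bone, score in bone_match_info.items()
    }

# Hypothetical per-bone scores produced by the second determination module:
colors = color_bones({"left_forearm": 0.92, "right_thigh": 0.55})
```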
10. An electronic device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, wherein the machine-readable instructions, when executed by the processor, perform the steps of the posture correction method according to any one of claims 1 to 8.
11. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the posture correction method according to any one of claims 1 to 8.
CN202010501035.9A 2020-06-04 2020-06-04 Posture correction method and device, electronic equipment and storage medium Pending CN111639612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010501035.9A CN111639612A (en) 2020-06-04 2020-06-04 Posture correction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111639612A true CN111639612A (en) 2020-09-08

Family

ID=72332070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010501035.9A Pending CN111639612A (en) 2020-06-04 2020-06-04 Posture correction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111639612A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510257A (en) * 2009-03-31 2009-08-19 华为技术有限公司 Human face similarity degree matching method and device
JP2013037454A (en) * 2011-08-05 2013-02-21 Ikutoku Gakuen Posture determination method, program, device, and system
CN106022213A (en) * 2016-05-04 2016-10-12 北方工业大学 Human body motion recognition method based on three-dimensional bone information
CN106020440A (en) * 2016-05-05 2016-10-12 西安电子科技大学 Emotion interaction based Peking Opera teaching system
CN106650687A (en) * 2016-12-30 2017-05-10 山东大学 Posture correction method based on depth information and skeleton information
CN108615055A (en) * 2018-04-19 2018-10-02 咪咕动漫有限公司 A kind of similarity calculating method, device and computer readable storage medium
CN108664877A (en) * 2018-03-09 2018-10-16 北京理工大学 A kind of dynamic gesture identification method based on range data
CN108815848A (en) * 2018-05-31 2018-11-16 腾讯科技(深圳)有限公司 Virtual objects display methods, device, electronic device and storage medium
CN109064487A (en) * 2018-07-02 2018-12-21 中北大学 A kind of human posture's comparative approach based on the tracking of Kinect bone node location
CN109859324A (en) * 2018-12-29 2019-06-07 北京光年无限科技有限公司 A kind of motion teaching method and device based on visual human
CN110245623A (en) * 2019-06-18 2019-09-17 重庆大学 A kind of real time human movement posture correcting method and system
CN110675474A (en) * 2019-08-16 2020-01-10 咪咕动漫有限公司 Virtual character model learning method, electronic device and readable storage medium
CN110796077A (en) * 2019-10-29 2020-02-14 湖北民族大学 Attitude motion real-time detection and correction method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HYPER_ET: "Kinect V2 Development (7): Measuring skeletal point heights and bone angles" *
LIU, LIZHENG: "Research on a Kinect-based human fall detection algorithm" *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308977A (en) * 2020-10-29 2021-02-02 字节跳动有限公司 Video processing method, video processing apparatus, and storage medium
CN112308977B (en) * 2020-10-29 2024-04-16 字节跳动有限公司 Video processing method, video processing device, and storage medium
CN114035683A (en) * 2021-11-08 2022-02-11 百度在线网络技术(北京)有限公司 User capturing method, device, equipment, storage medium and computer program product
CN114035683B (en) * 2021-11-08 2024-03-29 百度在线网络技术(北京)有限公司 User capturing method, apparatus, device, storage medium and computer program product

Similar Documents

Publication Publication Date Title
US11468612B2 (en) Controlling display of a model based on captured images and determined information
KR101519775B1 (en) Method and apparatus for generating animation based on object motion
CN109815776B (en) Action prompting method and device, storage medium and electronic device
CN102439603B (en) Simple techniques for three-dimensional modeling
JP2001504605A (en) Method for tracking and displaying a user's location and orientation in space, method for presenting a virtual environment to a user, and systems for implementing these methods
US20210349529A1 (en) Avatar tracking and rendering in virtual reality
WO2015054426A1 (en) Single-camera motion capture system
US11156830B2 (en) Co-located pose estimation in a shared artificial reality environment
CN110782482A (en) Motion evaluation method and device, computer equipment and storage medium
CN111639612A (en) Posture correction method and device, electronic equipment and storage medium
JP2015186531A (en) Action information processing device and program
CN111652983A (en) Augmented reality AR special effect generation method, device and equipment
WO2020147797A1 (en) Image processing method and apparatus, image device, and storage medium
CN112973110A (en) Cloud game control method and device, network television and computer readable storage medium
CN111639615A (en) Trigger control method and device for virtual building
CN111626247A (en) Attitude detection method and apparatus, electronic device and storage medium
WO2020147794A1 (en) Image processing method and apparatus, image device and storage medium
CN111625102A (en) Building display method and device
RU2106695C1 (en) Method for representation of virtual space for user and device which implements said method
US11360549B2 (en) Augmented reality doll
JP6899105B1 (en) Operation display device, operation display method and operation display program
CN117101138A (en) Virtual character control method, device, electronic equipment and storage medium
CN115243106A (en) Intelligent visual interaction method and device
CN113705483A (en) Icon generation method, device, equipment and storage medium
CN115888113A (en) Information processing apparatus, image processing apparatus, computer program, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination