CN110225400A - Motion capture method, device, mobile terminal and storage medium - Google Patents
Motion capture method, device, mobile terminal and storage medium
- Publication number
- CN110225400A CN110225400A CN201910611391.3A CN201910611391A CN110225400A CN 110225400 A CN110225400 A CN 110225400A CN 201910611391 A CN201910611391 A CN 201910611391A CN 110225400 A CN110225400 A CN 110225400A
- Authority
- CN
- China
- Prior art keywords
- video
- human body
- key point
- target user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
Abstract
Embodiments of the present disclosure provide a motion capture method, device, mobile terminal and storage medium. The method includes: determining human body key points in a video frame of a target user's live video; and determining, according to parameters of the video capture device and the standard lengths of the target user's key-point connection lines, the spatial positions of the key points in the video frame within the video capture device's coordinate system. Embodiments of the present disclosure can thereby recover the spatial positions of the human body key points in each frame of the target user's live video, so as to accurately recognize the human actions in those frames.
Description
Technical field
Embodiments of the present disclosure relate to the field of image processing technology, and in particular to a motion capture method, device, mobile terminal and storage medium.
Background technique
A live-streaming host may choose to stream as a virtual character, which can be any of various animated cartoon characters and is generated according to the virtual figure the host selects. During the stream, the host's movements are extracted from each video frame to achieve motion capture; the virtual character is then driven to mimic those movements, yielding a virtual-character video. The virtual-character video is composited into the real-scene video to obtain a mixed-scene video, which is uploaded to the live-streaming platform for broadcast.
In the prior art, action recognition is typically performed on each video frame separately to determine the type of action in the image, and the virtual character is then driven according to that action type. For example, if the action type in a frame is determined to be "raise the left hand", the virtual character is controlled to raise its left hand as well. The drawback of this approach is that only the action type is determined, so the accuracy of motion capture is low.
Summary of the invention
The present disclosure provides a motion capture method, device, mobile terminal and storage medium, so as to accurately recognize human actions in video images.
In a first aspect, an embodiment of the present disclosure provides a motion capture method, comprising:
determining human body key points in a video frame of a target user's live video; and
determining, according to parameters of the video capture device and the standard lengths of the target user's key-point connection lines, the spatial positions of the key points in the video frame within the video capture device's coordinate system.
In a second aspect, an embodiment of the present disclosure further provides a motion capture device, comprising:
a key point determining module, configured to determine human body key points in a video frame of a target user's live video; and
a spatial position determining module, configured to determine, according to parameters of the video capture device and the standard lengths of the target user's key-point connection lines, the spatial positions of the key points in the video frame within the video capture device's coordinate system.
In a third aspect, an embodiment of the present disclosure further provides a mobile terminal, comprising:
one or more processing units; and
a storage device for storing one or more programs,
which, when executed by the one or more processing units, cause the one or more processing units to implement the motion capture method described in the embodiments of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the motion capture method described in the embodiments of the present disclosure.
By determining the human body key points in a video frame of a target user's live video, and then determining their spatial positions in the video capture device's coordinate system from the device's parameters and the standard lengths of the key-point connection lines, embodiments of the present disclosure can recover the spatial positions of the key points in each frame and thereby accurately recognize the human actions in the frame.
Detailed description of the invention
The above and other features, advantages and aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
Fig. 1 is a flowchart of a motion capture method provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of a motion capture method provided by an embodiment of the present disclosure;
Fig. 3 is a flowchart of a motion capture method provided by an embodiment of the present disclosure;
Fig. 4 is a structural schematic diagram of a motion capture device provided by an embodiment of the present disclosure;
Fig. 5 is a structural schematic diagram of a mobile terminal provided by an embodiment of the present disclosure.
Specific embodiment
Embodiments of the present disclosure are described more fully below with reference to the accompanying drawings. Although certain embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments are provided for illustrative purposes only and are not intended to limit the scope of protection of the disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be executed in a different order and/or in parallel. In addition, method embodiments may include additional steps and/or omit steps that are shown. The scope of the present disclosure is not limited in this respect.
As used herein, the term "comprising" and its variants are open-ended, i.e. "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one other embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms are given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different devices, modules or units, and are not intended to limit the order of, or the interdependence between, the functions performed by these devices, modules or units.
It should be noted that the modifiers "a/an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art will appreciate that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Fig. 1 is a flowchart of a motion capture method provided by an embodiment of the present disclosure. This embodiment is applicable to recognizing human actions in video frames. The method may be executed by a motion capture device, which may be implemented in software and/or hardware and configured in a mobile terminal. As shown in Fig. 1, the method may include the following steps.
Step 101: determine the human body key points in a video frame of the target user's live video.
Here, a video frame is a video image in the target user's live video. During the live stream, the target user is photographed by the video capture device (for example, a camera) in the mobile terminal to obtain each video frame.
Optionally, determining the human body key points in a video frame of the target user's live video may include: after a captured video frame is obtained, performing image recognition on the frame to identify the human body key points it contains. The key points may include: head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist. An image coordinate system is established with the image center of the video frame as its origin, and the coordinates of all key points in this coordinate system are obtained. The coordinate position of each key point in the image coordinate system is taken as that key point's image position.
Step 102: determine, according to the parameters of the video capture device and the standard lengths of the target user's key-point connection lines, the spatial positions of the key points in the video frame within the video capture device's coordinate system.
Here, the standard length of a key-point connection line of the target user is the pre-measured physical length of that connection line. Optionally, the key-point connection lines include: the line between the head and the neck, the lines between the neck and the left and right shoulders, the lines between the left shoulder and the left elbow and between the right shoulder and the right elbow, and the lines between the left elbow and the left wrist and between the right elbow and the right wrist.
In a specific embodiment, a reference video image of the target user is obtained, in which the target user performs a preset standard action, namely an action with all human body key points in the same vertical plane; that is, every key point lies at the same distance from the video capture device's vertical plane. The standard lengths of the target user's key-point connection lines are then determined from the reference video image and the parameters of the video capture device.
Optionally, determining the spatial positions of the key points in the video frame within the video capture device's coordinate system may include: determining, from each key point's image position and the device parameters, a ray that starts at the video capture device's position and corresponds to that image position; and then obtaining the key point's spatial position in the device coordinate system from the ray and the standard lengths of the target user's key-point connection lines.
Here, the video capture device coordinate system is established from the parameters of the video capture device, and its origin is taken as the device's position. It is a three-dimensional Cartesian coordinate system whose origin is the focusing center (optical center) of the device and whose z-axis is the device's optical axis. The x- and y-axes of the device coordinate system are parallel to the X- and Y-axes of the image coordinate system; the z-axis coincides with the optical axis, which is perpendicular to the imaging plane. The image coordinate system is a two-dimensional Cartesian system lying in the imaging plane, with its origin at the intersection of the optical axis and the imaging plane. The distance between the origin of the device coordinate system and the origin of the image coordinate system is the focal length of the device.
In the device coordinate system, the image position of each key point is connected to the device position, yielding a ray that starts at the device position and passes through the key point's image position. Each key point's image position is the intersection of the imaging plane with the line from the actual key point to the device position; therefore, each human body key point of the target user must lie on the ray that starts at the device position and passes through the corresponding image position.
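The back-projection described above follows the standard pinhole model: in the device coordinate system the imaging plane sits at z equal to the focal length f, so the ray through image position (u, v) has direction (u, v, f). A minimal sketch, assuming u and v are expressed in the same units as the focal length:

```python
import math

def ray_direction(u, v, focal_length):
    """Unit direction of the ray from the camera origin through the image
    point (u, v), which lies on the imaging plane at z = focal_length
    (pinhole model). u, v share the focal length's units."""
    norm = math.sqrt(u * u + v * v + focal_length * focal_length)
    return (u / norm, v / norm, focal_length / norm)

# A point at the image center maps to a ray straight along the optical axis.
print(ray_direction(0.0, 0.0, 35.0))  # (0.0, 0.0, 1.0)
```

Every key point's 3D position is then some non-negative multiple of its ray direction; the remaining problem the patent addresses is choosing that multiple.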
Then, the spatial positions of the key points in the video frame within the device coordinate system are obtained from the rays and the standard lengths of the key-point connection lines. For example, the spatial positions of the neck, left shoulder and right shoulder are determined from the three rays corresponding to those key points, together with the standard lengths of the neck-to-left-shoulder and neck-to-right-shoulder lines. The neck, left shoulder and right shoulder of the human body normally lie on one straight line. A straight line is therefore determined that intersects all three rays, such that the segment between its intersection with the neck ray and its intersection with the left-shoulder ray has the standard neck-to-left-shoulder length, and the segment between its intersection with the neck ray and its intersection with the right-shoulder ray has the standard neck-to-right-shoulder length. The intersection of this line with the neck ray is the target user's neck, and its coordinates are the neck's spatial position in the device coordinate system. Likewise, the intersections with the left- and right-shoulder rays are the target user's left and right shoulders, and their coordinates are the corresponding spatial positions in the device coordinate system.
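The patent's worked example solves the three collinear rays jointly. A simpler, related idea can be sketched for chained key points: once one key point's 3D position is known, a connected key point can be located by intersecting its viewing ray with a sphere centered at the known point whose radius is the standard length of the connection line. This is an illustrative sketch of that geometry, not the patent's exact claimed procedure.

```python
import math

def point_on_ray_at_distance(direction, anchor, length):
    """Solve |t*direction - anchor| == length for t >= 0, i.e. intersect
    the ray from the origin (the camera) along unit vector `direction`
    with a sphere of radius `length` centered at the known point `anchor`.
    Returns the farther intersection, or None if the ray misses."""
    b = sum(d * a for d, a in zip(direction, anchor))  # direction . anchor
    c = sum(a * a for a in anchor) - length * length   # |anchor|^2 - L^2
    disc = b * b - c
    if disc < 0:
        return None
    t = b + math.sqrt(disc)
    if t < 0:
        return None
    return tuple(t * d for d in direction)

# Invented example: neck known at (0, 0, 2); a shoulder ray along the
# optical axis and a 0.2-unit connection length give a point near z = 2.2.
shoulder = point_on_ray_at_distance((0.0, 0.0, 1.0), (0.0, 0.0, 2.0), 0.2)
print(shoulder)  # approximately (0.0, 0.0, 2.2)
```

Taking the farther root is one possible disambiguation; in practice the choice of root would depend on which side of the anchor the limb extends.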
In the technical solution of this embodiment, the human body key points in a video frame of the target user's live video are determined, and their spatial positions in the video capture device coordinate system are then determined from the device parameters and the standard lengths of the key-point connection lines. The spatial positions of the key points can thus be recovered for each frame, so that the human actions in the frame are accurately recognized.
Fig. 2 is a flowchart of a motion capture method provided by an embodiment of the present disclosure. This embodiment may be combined with any of the optional solutions of the one or more embodiments above. In this embodiment, before the human body key points in a video frame of the target user's live video are determined, the method further includes: obtaining a reference video image of the target user, in which the target user performs a preset standard action, namely an action with all human body key points in the same vertical plane; and determining the standard lengths of the target user's key-point connection lines from the reference video image and the parameters of the video capture device.
As shown in Fig. 2, the method may include the following steps.
Step 201: obtain a reference video image of the target user, in which the target user performs a preset standard action, namely an action with all human body key points in the same vertical plane. The reference video image is a single video frame; in it, every key point of the target user lies in one vertical plane, at the same distance from the video capture device.
For example, if the key points include the head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist, the preset standard action is an action in which all of these are located in the same vertical plane.
Optionally, during video capture, standard-action prompt information is output to the target user, prompting the user to perform the preset standard action; the reference video image is then captured by the video capture device in the mobile terminal.
Step 202: determine the standard lengths of the target user's key-point connection lines from the reference video image and the parameters of the video capture device.
Optionally, this may include: recognizing the human body key points in the reference video image and determining their image positions; determining, from each key point's image position and the device parameters, a ray that starts at the device position and corresponds to that image position; recognizing the reference video image to obtain a head image of the target user; determining the depth matching the target user's head image from a preset correspondence between head images and depths; determining the spatial position of each key point in the device coordinate system from its ray and the matched depth; and determining the standard lengths of the target user's key-point connection lines from those spatial positions.
After the reference video image of the target user is obtained, image recognition is performed on it to identify the human body key points it contains, which may include: head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist. An image coordinate system is established with the image center of the reference video image as its origin, the coordinates of all key points in this system are obtained, and each key point's coordinate position is taken as its image position.
The video capture device coordinate system is established from the device parameters. In this coordinate system, each key point's image position is connected to the device position, yielding, for each key point, a ray that starts at the device position and passes through that key point's image position: one ray each through the image positions of the head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist.
Each key point's image position is the intersection of the imaging plane with the line from the actual key point to the device position; therefore, each key point of the target user lies on the ray that starts at the device position and passes through the corresponding image position.
Depth is the distance from the video capture device to the vertical plane in which an object lies. Human head images are collected in advance at different depths, and a correspondence between head images and depths is established from them; different depths correspond to different head images, and each depth is stored together with its matching head image.
The reference video image is recognized to obtain the head image of the target user. The stored head images are then searched, using image features of the head image, for the one matching the target user's head image; optionally, the image features include the size of the head image and the facial features within it, such as the distribution of facial landmarks. The depth stored with the matched head image is taken as the depth matching the target user's head image.
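The correspondence between head images and depths can be illustrated with a hypothetical calibration table mapping apparent head height in pixels to depth, with linear interpolation between calibrated entries. The table values below are invented for the sketch and are not taken from the patent, which matches whole head images rather than a single size measurement.

```python
# Hypothetical calibration: (head height in pixels, depth in meters),
# collected in advance as the patent describes. Values are invented.
CALIBRATION = [(200.0, 0.5), (100.0, 1.0), (50.0, 2.0), (25.0, 4.0)]

def depth_from_head_height(height_px):
    """Look up the depth matching an observed head height, interpolating
    linearly between the two nearest calibrated entries and clamping
    outside the calibrated range."""
    pts = sorted(CALIBRATION)  # ascending by head height
    if height_px <= pts[0][0]:
        return pts[0][1]
    if height_px >= pts[-1][0]:
        return pts[-1][1]
    for (h0, d0), (h1, d1) in zip(pts, pts[1:]):
        if h0 <= height_px <= h1:
            w = (height_px - h0) / (h1 - h0)
            return d0 + w * (d1 - d0)

print(depth_from_head_height(100.0))  # 1.0
```

A real system would likely fit a smooth model (apparent size is roughly inversely proportional to depth) rather than interpolate a small table, but the lookup captures the stored-correspondence idea.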
The depth matched to the target user's head image is the distance from the video capture device to the vertical plane containing the user's head. Since the preset standard action places all key points in the same vertical plane, every key point is at the same distance from the device as the head; the distance from the device to the vertical plane containing each key point is therefore known.
On each ray that starts at the video capture device position and passes through the image position of a human body key point, the three-dimensional coordinate point is obtained at which the distance from the containing vertical plane to the video capture device equals the depth matched with the target user's head image. The point so obtained is the corresponding human body key point, and its coordinate position is that key point's spatial position in the video capture device coordinate system. The spatial position of each of the target user's human body key points in the video capture device coordinate system can thus be determined.
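The ray-and-depth construction above can be sketched as follows, under the assumption that the camera sits at the origin of its own coordinate system and the ray direction has a positive z component pointing away from the camera (the function name and geometry conventions are assumptions, not the patent's exact formulation):

```python
def point_at_depth(direction, depth):
    """Scale a camera ray so its z component equals the matched depth,
    yielding the key point's 3-D position on that ray."""
    dx, dy, dz = direction
    if dz <= 0:
        raise ValueError("ray must point away from the camera")
    t = depth / dz          # ray parameter at which z == depth
    return (t * dx, t * dy, t * dz)
```

A ray through an image position left of centre, for example `(0.2, -0.1, 1.0)` at a matched depth of 2 m, lands the key point at the unique coordinate point on that ray whose containing vertical plane is 2 m from the device.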
A human body key point connection line is the line between two human body key points. The spatial positions of the two key points in the video capture device coordinate system are connected, the length of the connection line is calculated from the two spatial positions, and the calculated length is determined as the standard length of the corresponding key point connection line for the target user.
In a specific example, the spatial positions of the target user's head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist in the video capture device coordinate system are determined. The human body key point connection lines include: the line between the head and the neck, between the neck and the left shoulder, between the neck and the right shoulder, between the left shoulder and the left elbow, between the right shoulder and the right elbow, between the left elbow and the left wrist, and between the right elbow and the right wrist.
The same calculation is performed for each connection line. For the head and the neck: the spatial position of the target user's head in the video capture device coordinate system is connected with the spatial position of the target user's neck, the length of the connection line is calculated from the two spatial positions, and the calculated length is determined as the standard length of the line between the head and the neck for the target user. Likewise, the standard length of the line between the neck and the left shoulder is calculated from the spatial positions of the neck and the left shoulder; the line between the neck and the right shoulder, from the neck and the right shoulder; the line between the left shoulder and the left elbow, from the left shoulder and the left elbow; the line between the right shoulder and the right elbow, from the right shoulder and the right elbow; the line between the left elbow and the left wrist, from the left elbow and the left wrist; and the line between the right elbow and the right wrist, from the right elbow and the right wrist.
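The seven standard lengths above can be computed in one pass. A hedged sketch, in which `joints` maps each key point name to its measured position in the video capture device coordinate system (the names, sample layout and function name are illustrative assumptions):

```python
import math

# The seven key point connection lines named in the description.
CONNECTIONS = [
    ("head", "neck"), ("neck", "left_shoulder"), ("neck", "right_shoulder"),
    ("left_shoulder", "left_elbow"), ("right_shoulder", "right_elbow"),
    ("left_elbow", "left_wrist"), ("right_elbow", "right_wrist"),
]

def standard_lengths(joints):
    """Euclidean length of every key point connection line, in the
    units of the joint positions (e.g. metres)."""
    return {(a, b): math.dist(joints[a], joints[b]) for a, b in CONNECTIONS}
```

Each entry of the returned dictionary is the standard length of one connection line for the target user, measured once from the reference image and reused for every subsequent video frame.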
Step 203: determine the human body key points in the video frames of the target user's live video.
Step 204: according to the parameters of the video capture device and the standard lengths of the human body key point connection lines corresponding to the target user, determine the spatial positions of the human body key points in the video frame in the video capture device coordinate system.
In the technical solution of this embodiment, a reference video image of the target user is obtained in which the target user's action is the preset standard action, that is, an action with all human body key points in the same vertical plane, and the standard lengths of the human body key point connection lines corresponding to the target user are then determined from the reference video image and the parameters of the video capture device. The actual lengths of the target user's key point connection lines can thus be measured in advance from the acquired reference video image.
Fig. 3 is a flowchart of a motion capture method provided by an embodiment of the present disclosure. This embodiment may be combined with any of the optional schemes of the above embodiments. In this embodiment, determining the human body key points in a video frame of the target user's live video may include: recognizing the human body key points in the video frame and determining their image positions.
Determining the spatial positions of the key points in the video frame in the video capture device coordinate system, according to the parameters of the video capture device and the standard lengths of the key point connection lines corresponding to the target user, may include: according to the image position of each key point and the parameters of the video capture device, determining the ray that starts at the video capture device position and corresponds to that key point's image position; and obtaining, from the rays and the standard lengths of the key point connection lines, the spatial positions of the key points in the video frame in the video capture device coordinate system.
After the spatial positions of the key points in the video frame are determined, the method may further include: controlling the virtual character corresponding to the target user to imitate the target user's action according to those spatial positions, obtaining a real-time virtual character video image; adding the real-time virtual character video image to the video frame to obtain a mixed video frame; and uploading the mixed video frame to the live streaming platform.
As shown in Fig. 3, the method may include the following steps.
Step 301: recognize the human body key points in a video frame of the target user's live video and determine the image positions of the key points.
After a captured video frame is obtained, image recognition is performed on it to identify the human body key points it contains. Optionally, the key points may include the head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist. An image coordinate system is established with the image centre of the video frame as its origin, the coordinate positions of all the key points in the image coordinate system are obtained, and the coordinate position of each key point is determined as that key point's image position.
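The centre-origin image coordinate system described in Step 301 can be sketched as follows (the function name and the y-up convention are assumptions for illustration):

```python
def to_image_coords(px, py, width, height):
    """Map a top-left-origin pixel position to image coordinates whose
    origin is the image centre of the video frame (y grows upward)."""
    x = px - width / 2.0
    y = height / 2.0 - py   # flip so y increases upward
    return (x, y)
```

A key point detected at the exact image centre of a 640x480 frame thus receives the image position (0, 0).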
Step 302: according to the image positions of the human body key points and the parameters of the video capture device, determine the rays that start at the video capture device position and correspond to the key points' image positions.
The video capture device coordinate system is established according to the parameters of the video capture device. In this coordinate system, the image position of each key point is connected with the video capture device position, yielding, for each key point, the ray that starts at the device position and passes through that key point's image position. For example, rays are obtained that start at the video capture device position and pass through the image positions of the head, the neck, the left shoulder, the right shoulder, the left elbow, the right elbow, the left wrist and the right wrist, respectively.
The image position of each key point is the intersection of the imaging plane with the line between that key point of the target user and the video capture device position. It can therefore be determined that each key point of the target user lies on the ray that starts at the video capture device position and passes through the corresponding image position.
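Step 302 can be sketched with a pinhole camera model. The patent only speaks of "the parameters of the video capture device"; the focal lengths `fx`, `fy` and principal point `cx`, `cy` used here are the usual intrinsics and are an assumption of this sketch:

```python
def pixel_to_ray(px, py, fx, fy, cx, cy):
    """Direction of the ray that starts at the camera position (the
    origin of the device coordinate system) and passes through the
    pixel (px, py); z is normalised to 1."""
    return ((px - cx) / fx, (py - cy) / fy, 1.0)
```

A key point imaged exactly at the principal point yields the ray straight along the optical axis; any other pixel yields a direction tilted in proportion to its offset from the centre.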
Step 303: according to the rays and the standard lengths of the human body key point connection lines corresponding to the target user, obtain the spatial positions of the key points in the video frame in the video capture device coordinate system.
Optionally, the key point connection lines include: the line between the head and the neck, between the neck and the left shoulder, between the neck and the right shoulder, between the left shoulder and the left elbow, between the right shoulder and the right elbow, between the left elbow and the left wrist, and between the right elbow and the right wrist.
Optionally, obtaining the spatial positions may include: determining the spatial positions of the neck, the left shoulder and the right shoulder in the video capture device coordinate system from the three rays corresponding to the neck, the left shoulder and the right shoulder and from the standard lengths of the lines between the neck and the left shoulder and between the neck and the right shoulder; determining the head's spatial position from the neck's spatial position, the standard length of the line between the head and the neck, and the ray corresponding to the head; determining the left elbow's spatial position from the left shoulder's spatial position, the standard length of the line between the left shoulder and the left elbow, and the ray corresponding to the left elbow; determining the right elbow's spatial position from the right shoulder's spatial position, the standard length of the line between the right shoulder and the right elbow, and the ray corresponding to the right elbow; determining the left wrist's spatial position from the left elbow's spatial position, the standard length of the line between the left elbow and the left wrist, and the ray corresponding to the left wrist; and determining the right wrist's spatial position from the right elbow's spatial position, the standard length of the line between the right elbow and the right wrist, and the ray corresponding to the right wrist.
The neck, the left shoulder and the right shoulder of a human body usually lie on the same straight line. Optionally, determining their spatial positions may include: determining the straight line that intersects the three rays corresponding to the neck, the left shoulder and the right shoulder such that the length of the segment between its intersection with the neck ray and its intersection with the left-shoulder ray equals the standard length of the line between the neck and the left shoulder, and the length of the segment between its intersection with the neck ray and its intersection with the right-shoulder ray equals the standard length of the line between the neck and the right shoulder.
The intersection of this straight line with the neck ray is the target user's neck; its coordinate position is the neck's spatial position in the video capture device coordinate system. The intersection with the left-shoulder ray is the target user's left shoulder; its coordinate position is the left shoulder's spatial position in the video capture device coordinate system. The intersection with the right-shoulder ray is the target user's right shoulder; its coordinate position is the right shoulder's spatial position in the video capture device coordinate system.
Optionally, determining the head's spatial position may include: on the ray corresponding to the head, determining the two alternative three-dimensional coordinate points whose distance from the neck's spatial position in the video capture device coordinate system equals the standard length of the line between the head and the neck. One of the two alternatives is the head's coordinate point in a forward-lean state, the other is its coordinate point in a backward-lean state. Image recognition is performed on the video frame to determine whether the target user's head is in the forward-lean or the backward-lean state, and the corresponding alternative point is determined to be the target user's head; its coordinate position is the head's spatial position in the video capture device coordinate system.
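One way to obtain the two alternative points above is to intersect the ray with a sphere centred on the already-located parent joint (here the neck) whose radius is the standard length. This ray-sphere formulation is an assumption of this sketch, not the patent's explicit mathematics, but solving the quadratic yields exactly the two forward-lean and backward-lean candidates described:

```python
import math

def candidates_on_ray(direction, centre, radius, origin=(0.0, 0.0, 0.0)):
    """Points o + t*d on the ray (t > 0) at distance `radius` from
    `centre`; solves |o + t*d - c|^2 = r^2 for t. Returns up to two
    points, nearest (forward-lean) first."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = centre
    fx, fy, fz = ox - cx, oy - cy, oz - cz
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (fx * dx + fy * dy + fz * dz)
    c = fx * fx + fy * fy + fz * fz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []               # ray misses the sphere entirely
    ts = sorted({(-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a)})
    return [(ox + t * dx, oy + t * dy, oz + t * dz) for t in ts if t > 0]
```

With the neck fixed 2 m from the camera and a 0.2 m head-neck standard length, the head ray straight along the optical axis produces candidates at depths 1.8 m (forward-lean) and 2.2 m (backward-lean); image recognition then selects between them.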
Optionally, determining the left elbow's spatial position may include: on the ray corresponding to the left elbow, determining the two alternative three-dimensional coordinate points whose distance from the left shoulder's spatial position in the video capture device coordinate system equals the standard length of the line between the left shoulder and the left elbow. One alternative is the left elbow's coordinate point in a forward-lean state, the other its coordinate point in a backward-lean state. The forward-lean alternative is determined to be the target user's left elbow; its coordinate position is the left elbow's spatial position in the video capture device coordinate system.
The right elbow, the left wrist and the right wrist are determined in the same way: the two alternative points on the corresponding ray are those whose distance from the right shoulder, the left elbow or the right elbow, respectively, equals the standard length of the corresponding connection line, and in each case the forward-lean alternative is determined to be that key point of the target user, its coordinate position being that key point's spatial position in the video capture device coordinate system.
Step 304: according to the spatial positions of the human body key points in the video frame in the video capture device coordinate system, control the virtual character corresponding to the target user to imitate the target user's action, obtaining a real-time virtual character video image.
Optionally, the virtual character may be any animated cartoon character. A three-dimensional model of the virtual character is established in advance, with key points corresponding to the human body key points.
Optionally, controlling the virtual character to imitate the target user's action may include: determining the position of each key point of the virtual character's three-dimensional model in the video capture device coordinate system according to the spatial positions of the human body key points in the video frame, obtaining the real-time virtual character video image corresponding to the video frame.
Step 305: add the real-time virtual character video image to the video frame to obtain a mixed video frame.
Optionally, the real-time virtual character video image is composited into the video frame to obtain the mixed video frame, in which the target user's image is covered by the virtual character video image.
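The compositing in Step 305 can be sketched as a per-pixel alpha blend, so the character covers the target user wherever the character's alpha channel is set. Pure-Python nested lists of pixel tuples are used here purely for illustration; the function name and image representation are assumptions:

```python
def composite(frame, character):
    """Blend an RGBA character image over an RGB frame of the same size.
    frame: rows of (r, g, b); character: rows of (r, g, b, a)."""
    out = []
    for frame_row, char_row in zip(frame, character):
        row = []
        for (fr, fg, fb), (cr, cg, cb, ca) in zip(frame_row, char_row):
            a = ca / 255.0      # 255 = fully opaque character pixel
            row.append((round(a * cr + (1 - a) * fr),
                        round(a * cg + (1 - a) * fg),
                        round(a * cb + (1 - a) * fb)))
        out.append(row)
    return out
```

Opaque character pixels replace the frame (covering the target user's image); fully transparent pixels leave the original frame visible, yielding the mixed video frame.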
Step 306: upload the mixed video frame to the live streaming platform.
Since the mixed video frame carries the real-time virtual character video image of the virtual character, live streaming with the virtual character as the anchor is realized.
In the technical solution of this embodiment, the ray starting at the video capture device position and corresponding to each human body key point's image position is determined from the key point's image position and the parameters of the video capture device; the spatial positions of the key points in the video frame in the video capture device coordinate system are then obtained from the rays and the standard lengths of the key point connection lines corresponding to the target user; the virtual character corresponding to the target user is controlled to imitate the target user's action according to those spatial positions, yielding a real-time virtual character video image; the real-time virtual character video image is added to the video frame to obtain a mixed video frame; and the mixed video frame is uploaded to the live streaming platform. The spatial positions of the key points can thus be obtained from their image positions, the device parameters and the standard lengths, the virtual character can be driven to imitate the target user's action, and live streaming with the virtual character as the anchor is realized.
Fig. 4 is a structural schematic diagram of a motion capture device provided by an embodiment of the present disclosure. This embodiment is applicable to recognizing human actions in video frames. The device may be implemented in software and/or hardware and may be configured in a mobile terminal. As shown in Fig. 4, the device may include: a key point determining module 401 and a spatial position determining module 402.
The key point determining module 401 determines the human body key points in the video frames of the target user's live video. The spatial position determining module 402 determines the spatial positions of the key points in the video frame in the video capture device coordinate system according to the parameters of the video capture device and the standard lengths of the human body key point connection lines corresponding to the target user.
In the technical solution of this embodiment, the human body key points in the video frames of the target user's live video are determined, and their spatial positions in the video capture device coordinate system are then determined according to the parameters of the video capture device and the standard lengths of the key point connection lines corresponding to the target user, so that the human actions in the video frames can be accurately recognized.
Optionally, on the basis of the above technical solution, the device may further include: an image acquisition module for obtaining a reference video image of the target user, in which the target user's action is the preset standard action, that is, an action with all human body key points in the same vertical plane; and a standard length determining module for determining the standard lengths of the human body key point connection lines corresponding to the target user according to the reference video image and the parameters of the video capture device.
Optionally, on the basis of the above technical solution, the standard length determining module may include: a first position determining unit for recognizing the human body key points in the reference video image and determining their image positions; a first ray determining unit for determining, according to those image positions and the parameters of the video capture device, the rays starting at the video capture device position and corresponding to the key points' image positions; a head image acquiring unit for recognizing the reference video image and obtaining the target user's head image; a depth determining unit for determining the depth matched with the target user's head image according to the preset correspondence between head images and depths; and a standard length determining unit for determining the key points' spatial positions in the video capture device coordinate system from the rays and the matched depth, and determining, from those spatial positions, the standard lengths of the key point connection lines corresponding to the target user.
Optionally, on the basis of the above technical solution, the key point determining module 401 may include a second position determining unit for recognizing the human body key points in the video frames of the target user's live video and determining their image positions. The spatial position determining module 402 may include: a second ray determining unit for determining, according to the key points' image positions and the parameters of the video capture device, the rays starting at the video capture device position and corresponding to those image positions; and a spatial position determining unit for obtaining, from the rays and the standard lengths of the key point connection lines corresponding to the target user, the spatial positions of the key points in the video frame in the video capture device coordinate system.
Optionally, on the basis of the above technical solution, the human body key points may include: the head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist; the key point connection lines may include: the line between the head and the neck, between the neck and the left shoulder, between the neck and the right shoulder, between the left shoulder and the left elbow, between the right shoulder and the right elbow, between the left elbow and the left wrist, and between the right elbow and the right wrist.
Optionally, on the basis of the above technical solutions, the spatial position determination unit may include: a first determination subunit, configured to determine the spatial positions of the neck, the left shoulder and the right shoulder in the video capture device coordinate system according to three rays that take the video capture device location as a starting point and respectively correspond to the neck, the left shoulder and the right shoulder, the standard length of the line between the neck and the left shoulder, and the standard length of the line between the neck and the right shoulder; a second determination subunit, configured to determine the spatial position of the head in the video capture device coordinate system according to the spatial position of the neck in the video capture device coordinate system, the standard length of the line between the head and the neck, and a ray that takes the video capture device location as a starting point and corresponds to the head; a third determination subunit, configured to determine the spatial position of the left elbow in the video capture device coordinate system according to the spatial position of the left shoulder in the video capture device coordinate system, the standard length of the line between the left shoulder and the left elbow, and a ray that takes the video capture device location as a starting point and corresponds to the left elbow; a fourth determination subunit, configured to determine the spatial position of the right elbow in the video capture device coordinate system according to the spatial position of the right shoulder in the video capture device coordinate system, the standard length of the line between the right shoulder and the right elbow, and a ray that takes the video capture device location as a starting point and corresponds to the right elbow; a fifth determination subunit, configured to determine the spatial position of the left wrist in the video capture device coordinate system according to the spatial position of the left elbow in the video capture device coordinate system, the standard length of the line between the left elbow and the left wrist, and a ray that takes the video capture device location as a starting point and corresponds to the left wrist; and a sixth determination subunit, configured to determine the spatial position of the right wrist in the video capture device coordinate system according to the spatial position of the right elbow in the video capture device coordinate system, the standard length of the line between the right elbow and the right wrist, and a ray that takes the video capture device location as a starting point and corresponds to the right wrist.
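The subunit chain above solves the joints in a fixed parent-to-child order: the neck and shoulders are fixed first, then the head and elbows, then the wrists, so that each joint's parent is already known when the joint itself is solved. A minimal sketch of that dependency (joint names are hypothetical; the patent states the order only in prose):

```python
# Parent of each joint solved from an already-known position; the neck,
# left shoulder and right shoulder are determined first by the first subunit.
PARENT = {
    "head": "neck",
    "left_elbow": "left_shoulder",
    "right_elbow": "right_shoulder",
    "left_wrist": "left_elbow",
    "right_wrist": "right_elbow",
}

# Order in which the second through sixth subunits resolve the remaining joints.
SOLVE_ORDER = ["head", "left_elbow", "right_elbow", "left_wrist", "right_wrist"]

# Invariant: no joint appears before its parent in the solve order.
for i, joint in enumerate(SOLVE_ORDER):
    assert PARENT[joint] not in SOLVE_ORDER[i:]
```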
Optionally, on the basis of the above technical solutions, the device may further include: an action simulation module, configured to control a virtual character corresponding to the target user to simulate the actions of the target user according to the spatial positions of the human body key points in the video frame in the video capture device coordinate system, so as to obtain a real-time virtual character video image; an image adding module, configured to add the real-time virtual character video image to the video frame to obtain a mixed video frame; and a video frame uploading module, configured to upload the mixed video frame to a live streaming platform.
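The image adding step can be sketched as a simple alpha blend of the rendered character onto the frame. The patent does not specify how the character image is composited, so the RGBA layout, the `add_virtual_character` name and the top-left placement below are illustrative assumptions:

```python
import numpy as np

def add_virtual_character(frame, character_rgba, x, y):
    """Alpha-blend a rendered virtual-character image (H x W x 4, uint8)
    onto a color video frame at top-left corner (x, y), producing the
    mixed video frame. Sketch only: placement and pixel format assumed."""
    out = frame.copy()
    h, w = character_rgba.shape[:2]
    alpha = character_rgba[:, :, 3:4].astype(float) / 255.0  # per-pixel opacity
    region = out[y:y + h, x:x + w].astype(float)
    blended = alpha * character_rgba[:, :, :3] + (1.0 - alpha) * region
    out[y:y + h, x:x + w] = blended.astype(frame.dtype)
    return out
```

Fully opaque character pixels replace the frame pixels, fully transparent ones leave them untouched; the original frame is not modified in place.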
The motion capture device provided by the embodiments of the present disclosure can perform the motion capture method provided by the embodiments of the present disclosure, and has the functional modules and beneficial effects corresponding to the method.
Referring now to Fig. 5, a schematic structural diagram of a mobile terminal 500 suitable for implementing the embodiments of the present disclosure is shown. The mobile terminal in the embodiments of the present disclosure may include, but is not limited to, a mobile phone, a laptop computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable media player), a vehicle-mounted terminal (such as a vehicle-mounted navigation terminal), and the like. The mobile terminal shown in Fig. 5 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 5, the mobile terminal 500 may include a processing device (such as a central processing unit, a graphics processor, etc.) 501, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 506 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the mobile terminal 500. The processing device 501, the ROM 502 and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices can be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 507 including, for example, a liquid crystal display (LCD), a loudspeaker, a vibrator, etc.; storage devices 506 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 509. The communication device 509 may allow the mobile terminal 500 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 5 shows the mobile terminal 500 with various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 509, installed from the storage device 506, or installed from the ROM 502. When the computer program is executed by the processing device 501, the above-mentioned functions defined in the method of the embodiments of the present disclosure are performed.
It should be noted that the above-mentioned computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internet (for example, the Internet) and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above-mentioned computer-readable medium may be included in the above-mentioned mobile terminal, or it may exist alone without being assembled into the mobile terminal. The above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the mobile terminal, the mobile terminal is caused to: determine the human body key points in a video frame of a target user's live video; and determine, according to the parameters of the video capture device and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system.
The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The above programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the methods, devices, mobile terminals and computer program products according to various embodiments of the present disclosure. In this regard, each box in the flowcharts or block diagrams may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
The modules, units and subunits described in the embodiments of the present disclosure may be implemented by software or by hardware. The names of the modules, units or subunits do not, under certain circumstances, constitute a limitation on the modules, units or subunits themselves. For example, the image acquisition module may also be described as "a module for acquiring a reference video image of the target user"; the first position determination unit may also be described as "a unit for identifying the human body key points in the reference video image and determining the image positions of the human body key points"; and the first determination subunit may also be described as "a subunit for determining the spatial positions of the neck, the left shoulder and the right shoulder in the video capture device coordinate system according to three rays that take the video capture device location as a starting point and respectively correspond to the neck, the left shoulder and the right shoulder, the standard length of the line between the neck and the left shoulder, and the standard length of the line between the neck and the right shoulder".
The functions described herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, example one provides a motion capture method, comprising:
determining the human body key points in a video frame of a target user's live video;
determining, according to the parameters of a video capture device and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system.
According to one or more embodiments of the present disclosure, example two provides a motion capture method. On the basis of the motion capture method of example one, before determining the human body key points in the video frame of the target user's live video, the method further comprises:
acquiring a reference video image of the target user, wherein the action of the target user in the reference video image is a preset standard action, and the preset standard action is an action in which all human body key points are in the same vertical plane;
determining, according to the reference video image and the parameters of the video capture device, the standard lengths of the human body key point connecting lines corresponding to the target user.
According to one or more embodiments of the present disclosure, example three provides a motion capture method. On the basis of the motion capture method of example two, determining, according to the reference video image and the parameters of the video capture device, the standard lengths of the human body key point connecting lines corresponding to the target user comprises:
identifying the human body key points in the reference video image and determining the image positions of the human body key points;
determining, according to the image positions of the human body key points and the parameters of the video capture device, rays that take the video capture device location as a starting point and correspond to the image positions of the human body key points;
identifying the reference video image to obtain a head image of the target user;
determining, according to a preset correspondence between head images and depths, a depth matched with the head image of the target user;
determining, according to the rays and the depth matched with the head image of the target user, the spatial positions of the human body key points in the video capture device coordinate system;
determining, according to the spatial positions, the standard lengths of the human body key point connecting lines corresponding to the target user.
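Because the reference pose places every key point in one vertical plane at the depth matched from the head image, each key point's position can be taken as its camera ray scaled so that its depth component equals the matched depth, and the standard lengths then follow as Euclidean distances between connected key points. A sketch under that stated assumption (function and joint names are illustrative, not from the patent):

```python
import numpy as np

def point_at_depth(ray_dir, depth_z):
    """Scale a camera ray (starting at the device location, taken as the
    origin) so its z component equals the matched depth. Valid because the
    reference pose puts all key points in one fronto-parallel vertical plane."""
    ray_dir = np.asarray(ray_dir, dtype=float)
    return ray_dir * (depth_z / ray_dir[2])

def standard_lengths(rays, edges, depth_z):
    """rays: {key_point_name: ray_direction}; edges: [(a, b), ...] pairs of
    connected key points. Returns {(a, b): length} in the video capture
    device coordinate system."""
    pts = {name: point_at_depth(d, depth_z) for name, d in rays.items()}
    return {(a, b): float(np.linalg.norm(pts[a] - pts[b])) for a, b in edges}
```

These lengths are computed once from the reference image and then reused for every live video frame.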
According to one or more embodiments of the present disclosure, example four provides a motion capture method. On the basis of the motion capture method of example one, determining the human body key points in the video frame of the target user's live video comprises:
identifying the human body key points in the video frame of the target user's live video and determining the image positions of the human body key points;
and determining, according to the parameters of the video capture device and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system comprises:
determining, according to the image positions of the human body key points and the parameters of the video capture device, rays that take the video capture device location as a starting point and correspond to the image positions of the human body key points;
obtaining, according to the rays and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system.
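The ray construction above depends only on the video capture device's parameters. For a simple pinhole camera model with focal lengths and a principal point (an assumption here; the patent does not fix a particular camera model), the ray through a key point's pixel position can be computed as:

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Unit direction of the ray from the camera origin (the video capture
    device location) through pixel (u, v), for a pinhole camera with focal
    lengths fx, fy and principal point (cx, cy) in pixels."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

# A pixel at the principal point maps to the optical axis (0, 0, 1).
print(pixel_to_ray(320, 240, 500.0, 500.0, 320.0, 240.0))
```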
According to one or more embodiments of the present disclosure, example five provides a motion capture method. On the basis of the motion capture method of example four, the human body key points include: the head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist;
and the human body key point connecting lines include: the line between the head and the neck, the line between the neck and the left shoulder, the line between the neck and the right shoulder, the line between the left shoulder and the left elbow, the line between the right shoulder and the right elbow, the line between the left elbow and the left wrist, and the line between the right elbow and the right wrist.
According to one or more embodiments of the present disclosure, example six provides a motion capture method. On the basis of the motion capture method of example five, obtaining, according to the rays and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system comprises:
determining the spatial positions of the neck, the left shoulder and the right shoulder in the video capture device coordinate system according to three rays that take the video capture device location as a starting point and respectively correspond to the neck, the left shoulder and the right shoulder, the standard length of the line between the neck and the left shoulder, and the standard length of the line between the neck and the right shoulder;
determining the spatial position of the head in the video capture device coordinate system according to the spatial position of the neck in the video capture device coordinate system, the standard length of the line between the head and the neck, and a ray that takes the video capture device location as a starting point and corresponds to the head;
determining the spatial position of the left elbow in the video capture device coordinate system according to the spatial position of the left shoulder in the video capture device coordinate system, the standard length of the line between the left shoulder and the left elbow, and a ray that takes the video capture device location as a starting point and corresponds to the left elbow;
determining the spatial position of the right elbow in the video capture device coordinate system according to the spatial position of the right shoulder in the video capture device coordinate system, the standard length of the line between the right shoulder and the right elbow, and a ray that takes the video capture device location as a starting point and corresponds to the right elbow;
determining the spatial position of the left wrist in the video capture device coordinate system according to the spatial position of the left elbow in the video capture device coordinate system, the standard length of the line between the left elbow and the left wrist, and a ray that takes the video capture device location as a starting point and corresponds to the left wrist;
determining the spatial position of the right wrist in the video capture device coordinate system according to the spatial position of the right elbow in the video capture device coordinate system, the standard length of the line between the right elbow and the right wrist, and a ray that takes the video capture device location as a starting point and corresponds to the right wrist.
According to one or more embodiments of the present disclosure, example seven provides a motion capture method. On the basis of the motion capture method of example one, after determining the spatial positions of the human body key points in the video frame in the video capture device coordinate system, the method further comprises:
controlling, according to the spatial positions of the human body key points in the video frame in the video capture device coordinate system, a virtual character corresponding to the target user to simulate the action of the target user, so as to obtain a real-time virtual character video image;
adding the real-time virtual character video image to the video frame to obtain a mixed video frame;
uploading the mixed video frame to a live streaming platform.
According to one or more embodiments of the present disclosure, example eight provides a motion capture device, comprising:
a key point determining module, configured to determine the human body key points in a video frame of a target user's live video;
a spatial position determining module, configured to determine, according to the parameters of a video capture device and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system.
According to one or more embodiments of the present disclosure, example nine provides a mobile terminal, comprising:
one or more processing devices;
a storage device, configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processing devices, the one or more processing devices implement the motion capture method according to any one of examples one to seven.
According to one or more embodiments of the present disclosure, example ten provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the motion capture method according to any one of examples one to seven is implemented.
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
In addition, although each operation is depicted in a specific order, this should not be understood as requiring these operations to be performed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Similarly, although several specific implementation details are contained in the above discussion, these should not be construed as limitations on the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments individually or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or method logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. On the contrary, the specific features and actions described above are merely exemplary forms of implementing the claims.
Claims (10)
1. A motion capture method, characterized by comprising:
determining the human body key points in a video frame of a target user's live video;
determining, according to the parameters of a video capture device and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system.
2. The method according to claim 1, characterized in that, before determining the human body key points in the video frame of the target user's live video, the method further comprises:
acquiring a reference video image of the target user, wherein the action of the target user in the reference video image is a preset standard action, and the preset standard action is an action in which all human body key points are in the same vertical plane;
determining, according to the reference video image and the parameters of the video capture device, the standard lengths of the human body key point connecting lines corresponding to the target user.
3. The method according to claim 2, characterized in that determining, according to the reference video image and the parameters of the video capture device, the standard lengths of the human body key point connecting lines corresponding to the target user comprises:
identifying the human body key points in the reference video image and determining the image positions of the human body key points;
determining, according to the image positions of the human body key points and the parameters of the video capture device, rays that take the video capture device location as a starting point and correspond to the image positions of the human body key points;
identifying the reference video image to obtain a head image of the target user;
determining, according to a preset correspondence between head images and depths, a depth matched with the head image of the target user;
determining, according to the rays and the depth matched with the head image of the target user, the spatial positions of the human body key points in the video capture device coordinate system;
determining, according to the spatial positions, the standard lengths of the human body key point connecting lines corresponding to the target user.
4. The method according to claim 1, characterized in that determining the human body key points in the video frame of the target user's live video comprises:
identifying the human body key points in the video frame of the target user's live video and determining the image positions of the human body key points;
and determining, according to the parameters of the video capture device and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system comprises:
determining, according to the image positions of the human body key points and the parameters of the video capture device, rays that take the video capture device location as a starting point and correspond to the image positions of the human body key points;
obtaining, according to the rays and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system.
5. The method according to claim 4, characterized in that the human body key points comprise: the head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist;
and the human body key point connecting lines comprise: the line between the head and the neck, the line between the neck and the left shoulder, the line between the neck and the right shoulder, the line between the left shoulder and the left elbow, the line between the right shoulder and the right elbow, the line between the left elbow and the left wrist, and the line between the right elbow and the right wrist.
6. The method according to claim 5, characterized in that obtaining, according to the rays and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system comprises:
determining the spatial positions of the neck, the left shoulder and the right shoulder in the video capture device coordinate system according to three rays that take the video capture device location as a starting point and respectively correspond to the neck, the left shoulder and the right shoulder, the standard length of the line between the neck and the left shoulder, and the standard length of the line between the neck and the right shoulder;
determining the spatial position of the head in the video capture device coordinate system according to the spatial position of the neck in the video capture device coordinate system, the standard length of the line between the head and the neck, and a ray that takes the video capture device location as a starting point and corresponds to the head;
determining the spatial position of the left elbow in the video capture device coordinate system according to the spatial position of the left shoulder in the video capture device coordinate system, the standard length of the line between the left shoulder and the left elbow, and a ray that takes the video capture device location as a starting point and corresponds to the left elbow;
determining the spatial position of the right elbow in the video capture device coordinate system according to the spatial position of the right shoulder in the video capture device coordinate system, the standard length of the line between the right shoulder and the right elbow, and a ray that takes the video capture device location as a starting point and corresponds to the right elbow;
determining the spatial position of the left wrist in the video capture device coordinate system according to the spatial position of the left elbow in the video capture device coordinate system, the standard length of the line between the left elbow and the left wrist, and a ray that takes the video capture device location as a starting point and corresponds to the left wrist;
determining the spatial position of the right wrist in the video capture device coordinate system according to the spatial position of the right elbow in the video capture device coordinate system, the standard length of the line between the right elbow and the right wrist, and a ray that takes the video capture device location as a starting point and corresponds to the right wrist.
7. The method according to claim 1, characterized in that, after determining the spatial positions of the human body key points in the video frame in the video capture device coordinate system, the method further comprises:
controlling, according to the spatial positions of the human body key points in the video frame in the video capture device coordinate system, a virtual character corresponding to the target user to simulate the action of the target user, so as to obtain a real-time virtual character video image;
adding the real-time virtual character video image to the video frame to obtain a mixed video frame;
uploading the mixed video frame to a live streaming platform.
8. A motion capture device, characterized by comprising:
a key point determining module, configured to determine the human body key points in a video frame of a target user's live video;
a spatial position determining module, configured to determine, according to the parameters of a video capture device and the standard lengths of the human body key point connecting lines corresponding to the target user, the spatial positions of the human body key points in the video frame in the video capture device coordinate system.
9. A mobile terminal, comprising:
One or more processors;
A storage device, configured to store one or more programs;
Wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the motion capture method according to any one of claims 1-7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the motion capture method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910611391.3A CN110225400B (en) | 2019-07-08 | 2019-07-08 | Motion capture method and device, mobile terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110225400A true CN110225400A (en) | 2019-09-10 |
CN110225400B CN110225400B (en) | 2022-03-04 |
Family
ID=67812874
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910611391.3A Active CN110225400B (en) | 2019-07-08 | 2019-07-08 | Motion capture method and device, mobile terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110225400B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140320606A1 (en) * | 2013-04-26 | 2014-10-30 | Bi2-Vision Co., Ltd. | 3d video shooting control system, 3d video shooting control method and program |
WO2015006431A1 (en) * | 2013-07-10 | 2015-01-15 | Faro Technologies, Inc. | Triangulation scanner having motorized elements |
CN105333818A (en) * | 2014-07-16 | 2016-02-17 | 浙江宇视科技有限公司 | 3D space measurement method based on monocular camera |
CN105389569A (en) * | 2015-11-17 | 2016-03-09 | 北京工业大学 | Human body posture estimation method |
CN106249508A (en) * | 2016-08-15 | 2016-12-21 | 广东欧珀移动通信有限公司 | Automatic focusing method and system, shooting apparatus
CN106446815A (en) * | 2016-09-14 | 2017-02-22 | 浙江大学 | Simultaneous positioning and map building method |
CN106839975A (en) * | 2015-12-03 | 2017-06-13 | 杭州海康威视数字技术股份有限公司 | Volume measuring method and its system based on depth camera |
CN108200446A (en) * | 2018-01-12 | 2018-06-22 | 北京蜜枝科技有限公司 | Multimedia interactive system and method on the line of virtual image |
CN108986189A (en) * | 2018-06-21 | 2018-12-11 | 珠海金山网络游戏科技有限公司 | Method and system based on real time multi-human motion capture in three-dimensional animation and live streaming |
CN109191548A (en) * | 2018-08-28 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | Animation method, device, equipment and storage medium |
CN109740513A (en) * | 2018-12-29 | 2019-05-10 | 青岛小鸟看看科技有限公司 | Operation action analysis method and apparatus
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111523408A (en) * | 2020-04-09 | 2020-08-11 | 北京百度网讯科技有限公司 | Motion capture method and device |
CN111523408B (en) * | 2020-04-09 | 2023-09-15 | 北京百度网讯科技有限公司 | Motion capturing method and device |
CN112753210A (en) * | 2020-04-26 | 2021-05-04 | 深圳市大疆创新科技有限公司 | Movable platform, control method thereof and storage medium |
CN113344999A (en) * | 2021-06-28 | 2021-09-03 | 北京市商汤科技开发有限公司 | Depth detection method and device, electronic equipment and storage medium |
WO2023273498A1 (en) * | 2021-06-28 | 2023-01-05 | 上海商汤智能科技有限公司 | Depth detection method and apparatus, electronic device, and storage medium |
CN113743237A (en) * | 2021-08-11 | 2021-12-03 | 北京奇艺世纪科技有限公司 | Follow-up action accuracy determination method and device, electronic device and storage medium |
CN113743237B (en) * | 2021-08-11 | 2023-06-02 | 北京奇艺世纪科技有限公司 | Method and device for judging accuracy of follow-up action, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110225400B (en) | 2022-03-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110225400A (en) | Motion capture method and device, mobile terminal and storage medium | |
CN109902659B (en) | Method and apparatus for processing human body image | |
CN109508681A (en) | Method and apparatus for generating a human body key point detection model | |
CN110288049A (en) | Method and apparatus for generating image recognition model | |
CN110335334A (en) | Avatar driving display method, device, electronic equipment and storage medium | |
CN110532981A (en) | Human body key point extraction method and device, readable storage medium, and equipment | |
US20220386061A1 (en) | Audio processing method and apparatus, readable medium, and electronic device | |
CN109754464B (en) | Method and apparatus for generating information | |
WO2020253716A1 (en) | Image generation method and device | |
CN108594999A (en) | Control method and device for panoramic picture display systems | |
CN109615655A (en) | Method and device for determining object posture, electronic equipment, and computer medium | |
CN109683710B (en) | Palm normal vector determination method, apparatus, equipment, and storage medium | |
CN109583391A (en) | Key point detection method, apparatus, equipment, and readable medium | |
CN110035271B (en) | Fidelity image generation method and device and electronic equipment | |
CN110189394A (en) | Mouth shape generation method, device and electronic equipment | |
CN112907652B (en) | Camera pose acquisition method, video processing method, display device, and storage medium | |
CN110047121A (en) | End-to-end animation generation method, device and electronic equipment | |
JP2011043788A (en) | Information processing device, information processing method, and program | |
CN110334650A (en) | Object detecting method, device, electronic equipment and storage medium | |
WO2024094158A1 (en) | Special effect processing method and apparatus, device, and storage medium | |
CN109829431A (en) | Method and apparatus for generating information | |
CN113160270A (en) | Visual map generation method, device, terminal and storage medium | |
CN109816791B (en) | Method and apparatus for generating information | |
CN110084306A (en) | Method and apparatus for generating dynamic image | |
CN110060324A (en) | Image rendering method, device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||