CN107845129A - Three-dimensional reconstruction method and device, and augmented reality method and device - Google Patents


Info

Publication number
CN107845129A
CN107845129A (application CN201711084000.4A)
Authority
CN
China
Prior art keywords
feature point
motion
human body
character
dimensional image
Prior art date
Legal status
Pending
Application number
CN201711084000.4A
Other languages
Chinese (zh)
Inventor
宋亚楠
邱楠
万海
邓婧文
周游
程谦
刘海峡
Current Assignee
Shenzhen Green Bristlegrass Intelligence Science And Technology Ltd
Original Assignee
Shenzhen Green Bristlegrass Intelligence Science And Technology Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Green Bristlegrass Intelligence Science And Technology Ltd filed Critical Shenzhen Green Bristlegrass Intelligence Science And Technology Ltd
Priority to CN201711084000.4A
Publication of CN107845129A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a three-dimensional reconstruction method and device and an augmented reality method and device. The augmented reality method applies augmented reality to a three-dimensional figure and comprises: a target acquisition step, obtaining a second character in a video frame; a motion-capture step, capturing the motion feature points of the second character; a trajectory-obtaining step, tracking the motion of the motion feature points through the video to obtain the trajectory of each motion feature point; and a trajectory-applying step, assigning the trajectory of each motion feature point as the trajectory of the corresponding keypoint in the three-dimensional figure and estimating the trajectories between each pair of keypoints, so as to obtain the overall motion trajectory of the three-dimensional figure, thereby applying the motion state of the second character in the video to the three-dimensional figure. The present invention uses motion-tracking techniques to obtain a character's actions from a video and then makes a virtual character reproduce those actions, providing users with a better interactive experience at relatively low cost.

Description

Three-dimensional reconstruction method and device, and augmented reality method and device
Technical field
The present invention relates to the technical field of image processing, and more particularly to a three-dimensional reconstruction method and device and an augmented reality method and device.
Background art
The Internet culture industry, while providing cultural content for consumers, is gradually developing into an industry with enormous market potential. First, consumption data from the "two-dimensional" (ACG) community show that, with the steady growth of core ACG users, the increasingly large base of casual ACG users, and the continuous rise in both groups' disposable income, the spending power of ACG users keeps increasing, driving rapid growth of the ACG culture industry. Second, rising Internet penetration and the maturing mobile network environment make it more convenient for users to obtain information, further promoting the introduction and rapid spread of ACG culture.
The forms in which users consume content have evolved from traditional formats such as comics, games, and animation toward forms ever more closely combined with new technologies such as virtual reality and augmented reality.
In users' real usage scenarios, besides displaying two-dimensional ACG characters (that is, virtual characters) in three dimensions by various means, there is also demand for such characters to perform richer actions and behaviors.
At present, the existing way to display a two-dimensional virtual character in three dimensions is to present the character with holographic projection techniques to achieve a stereoscopic display effect, but what is shown is still essentially a two-dimensional image.
In addition, richer experiences are currently provided by designing and rendering a series of new actions and scenes so that the three-dimensional virtual figure can be applied to various scenarios; this approach consumes considerable manpower and material resources, and its cost is high.
Summary of the invention
The present invention aims to provide a three-dimensional reconstruction method and device and an augmented reality method and device, so as to solve one of the above problems.
In a first aspect, the present invention provides a three-dimensional reconstruction method, comprising:
an annotation step: annotating at least two two-dimensional images of a first character with preset human-body feature points;
a splicing step: splicing the annotated two-dimensional images according to the human-body feature points;
a stitching step: stitching the spliced image using the human-body feature points and the common edges between them;
a reconstruction step: fitting the stitched image to a pre-built three-dimensional model, so as to obtain a three-dimensional figure of the first character.
Further, there are 21 human-body feature points.
Further, the splicing step specifically comprises: splicing the positions of the annotated two-dimensional images according to the IDs of the human-body feature points, and splicing their orientations according to the directions of the human-body feature points.
Further, the stitching step specifically comprises: stitching the spliced image with a virtual stitching technique in a triangle-mesh manner, using the human-body feature points and the common edges between them.
Further, the three-dimensional model corresponds to the first character.
In a second aspect, the present invention provides a three-dimensional reconstruction device, comprising:
an annotation unit, for annotating at least two two-dimensional images of a first character with preset human-body feature points;
a splicing unit, for splicing the annotated two-dimensional images according to the human-body feature points;
a stitching unit, for stitching the spliced image using the human-body feature points and the common edges between them;
a reconstruction unit, for fitting the stitched image to a pre-built three-dimensional model of the first character, so as to obtain a three-dimensional figure of the first character.
The three-dimensional reconstruction method and device provided by the present invention annotate, splice, stitch, and reconstruct multiple two-dimensional images of a virtual character to obtain a three-dimensional figure of the character, thereby achieving a three-dimensional display of the virtual character.
In a third aspect, the present invention provides an augmented reality method for applying augmented reality to the three-dimensional figure obtained by the three-dimensional reconstruction method, comprising:
a target acquisition step: obtaining a second character in a video frame;
a motion-capture step: capturing the motion feature points of the second character;
a trajectory-obtaining step: tracking the motion of the motion feature points through the video, so as to obtain the trajectory of each motion feature point;
a trajectory-applying step: assigning the trajectory of each motion feature point as the trajectory of the corresponding keypoint in the three-dimensional figure, and estimating the trajectories between each pair of keypoints, so as to obtain the overall motion trajectory of the three-dimensional figure, thereby applying the motion state of the second character in the video to the three-dimensional figure.
Further, the trajectory between each pair of keypoints is estimated with a Bezier curve.
Further, the keypoints are the human-body feature points in the three-dimensional figure whose positions correspond to the motion feature points.
In a fourth aspect, the present invention provides an augmented reality device, comprising:
a target acquisition unit, for obtaining a second character in a video frame;
a motion-capture unit, for capturing the motion feature points of the second character;
a trajectory-obtaining unit, for tracking the motion of the motion feature points through the video, so as to obtain the trajectory of each motion feature point;
a trajectory-applying unit, for assigning the trajectory of each motion feature point as the trajectory of the corresponding keypoint in the three-dimensional figure, and estimating the trajectories between each pair of keypoints, so as to obtain the overall motion trajectory of the three-dimensional figure, thereby applying the motion state of the second character in the video to the three-dimensional figure.
The augmented reality method and device provided by the present invention use motion-tracking techniques to obtain a character's actions from a video, and then make a virtual character reproduce those actions. An existing character in a video can thus be replaced with a virtual figure within the original scene, providing users with a better interactive experience at relatively low cost.
Brief description of the drawings
Fig. 1 is a flow chart of the three-dimensional reconstruction method provided by an embodiment of the present invention;
Fig. 2 is a block diagram of the three-dimensional reconstruction device provided by an embodiment of the present invention;
Fig. 3 is a flow chart of the augmented reality method provided by an embodiment of the present invention;
Fig. 4 is a block diagram of the augmented reality device provided by an embodiment of the present invention.
Detailed description of the embodiments
The present invention is further illustrated below through specific embodiments. It should be understood, however, that these embodiments serve only for more detailed description and are not to be construed as limiting the present invention in any way.
Embodiment one
With reference to Fig. 1, the three-dimensional reconstruction method provided by this embodiment comprises:
annotation step S1: annotating at least two two-dimensional images of a first character with preset human-body feature points;
splicing step S2: splicing the annotated two-dimensional images according to the human-body feature points;
stitching step S3: stitching the spliced image using the human-body feature points and the common edges between them;
reconstruction step S4: fitting the stitched image to a pre-built three-dimensional model, so as to obtain a three-dimensional figure of the first character.
The three-dimensional reconstruction method provided by this embodiment of the present invention annotates, splices, stitches, and reconstructs multiple two-dimensional images of a virtual character to obtain a three-dimensional figure of the character, thereby achieving a three-dimensional display of the virtual character.
It should be noted that in the present embodiment, after splicing to image, continue to spliced image using suture Technology is sutured, and after suture, obtained general image can keep image entirety to exist in the operations such as stretching, compression Uniformity in deformation.Furthermore, it is necessary to explanation, is not being sutured to spliced image, i.e. simply simple spelling Connect, reverse in picture, stretching when, it is possible that different piece deformation is different or occurs and fissions between different piece.
Preferably, there are 21 human-body feature points.
In this embodiment, to generate the three-dimensional appearance of a virtual character, each two-dimensional image is annotated with the preset human-body feature points; when the two-dimensional images are later spliced, the human-body feature points on each image serve as the splicing index points (corresponding points), so that the images can be spliced accurately.
It should be noted that, to ensure fluent and attractive character motion and ease of design in the animation and game fields, the human-body feature points can be chosen according to physiological and medical research combined with the practical needs of character animation. These feature points are preset in advance (rather than identified in real time by motion detection, machine learning, or similar methods). Generally, alignment keypoints such as the joints, the optical centers of the eyes, the nose, the throat, the crown of the head, and the heels can be chosen as the preset human-body feature points. Preferably, in this embodiment there are 21 preset human-body feature points, located respectively at: the left heel, right heel, left knee, right knee, left waist, right waist, lower back, belly, left palm center, right palm center, left palm back, right palm back, left elbow, right elbow, left shoulder, right shoulder, left ear, right ear, nose, back of the head, and crown. It should also be noted that this embodiment places no particular limit on the number or exact locations of the feature points, which can be set according to actual needs.
In addition, when the two-dimensional images are annotated with the human-body feature points, the annotation content includes the ID of each human-body feature point and its direction in the two-dimensional image.
It should be noted that the ID of a human-body feature point is set to uniquely identify that feature point. In general, no special attention need be paid to the ordering or coding rules of the ID numbers. However, when the IDs must be transmitted over a network as information or are involved in coding work, an ID coding rule can be set to ensure effective information transfer and to save bandwidth and subsequent coding effort. For example, the left back waist could be given the ID WBL (waist back left), or the ID WB_001 (where WB abbreviates "waist back", 001 denotes left, 002 denotes right, and 000 denotes middle). This embodiment places no particular limit on the coding rule, which can be set according to actual needs.
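As an illustration of such a coding rule, the sketch below enumerates the 21 feature points of this embodiment under a hypothetical WB_001-style scheme. The region abbreviations are assumptions chosen for illustration; the patent fixes neither them nor the rule itself:

```python
# Hypothetical ID table for the 21 preset human-body feature points,
# following the WB_001-style rule described above: a region abbreviation
# plus 001 = left, 002 = right, 000 = middle. The abbreviations are
# illustrative assumptions, not defined by the patent.
FEATURE_POINTS = {
    "HE_001": "left heel",        "HE_002": "right heel",
    "KN_001": "left knee",        "KN_002": "right knee",
    "WA_001": "left waist",       "WA_002": "right waist",
    "WB_000": "lower back",       "BE_000": "belly",
    "PC_001": "left palm center", "PC_002": "right palm center",
    "PB_001": "left palm back",   "PB_002": "right palm back",
    "EL_001": "left elbow",       "EL_002": "right elbow",
    "SH_001": "left shoulder",    "SH_002": "right shoulder",
    "EA_001": "left ear",         "EA_002": "right ear",
    "NO_000": "nose",             "BH_000": "back of head",
    "CR_000": "crown",
}

def side_of(point_id: str) -> str:
    """Decode the left/right/middle digit of a feature-point ID."""
    return {"000": "middle", "001": "left", "002": "right"}[point_id.split("_")[1]]
```

A scheme like this keeps IDs short for transmission while still letting either side of the annotation pipeline recover the body region and side from the ID alone.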
More preferably, splicing step S2 specifically comprises: splicing the positions of the annotated two-dimensional images according to the IDs of the human-body feature points, and splicing their orientations according to the directions of the human-body feature points.
In this embodiment, the direction of a human-body feature point is defined in a right-handed coordinate system: the direction of each joint when the person stands upright is that joint's initial direction, and during motion each joint's forward direction in the right-handed coordinate system is its real-time direction. Setting a direction for each joint ensures that adjacent regions are not mis-stitched during two-dimensional splicing and virtual stitching (when stitching, not only must the human-body feature points be aligned, the directions of corresponding feature points must be aligned as well). It should also be noted that the choice of origin is relative (the position of the body is relative); in practice the highest point, lowest point, or center point is usually chosen as the origin, corresponding roughly to the crown, the heels, or the navel of the body. The position of the origin can be selected according to actual needs, and this embodiment places no particular limit on it.
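A minimal sketch of the direction-alignment idea: before two annotated views are spliced, the second view can be rotated so that a shared feature point's direction vector matches the first view's. The planar rotation below is an illustrative simplification of the right-handed-frame alignment described above, not the patent's procedure:

```python
import math

def rotate(pt, angle):
    """Rotate a 2D point about the origin by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * pt[0] - s * pt[1], s * pt[0] + c * pt[1])

def align_direction(dir_a, dir_b):
    """Angle that rotates direction vector dir_b onto dir_a."""
    return math.atan2(dir_a[1], dir_a[0]) - math.atan2(dir_b[1], dir_b[0])

# Image A records the left-shoulder direction as +x; image B recorded it as +y.
angle = align_direction((1.0, 0.0), (0.0, 1.0))
aligned = rotate((0.0, 1.0), angle)  # B's direction after alignment, close to (1, 0)
```

Applying the same rotation to all of image B's feature points before splicing is what keeps corresponding directions, and hence adjacent regions, from being mis-stitched.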
More preferably, stitching step S3 specifically comprises: stitching the spliced image with a virtual stitching technique in a triangle-mesh manner, using the human-body feature points and the common edges between them.
In this embodiment, when the three-dimensional figure of a virtual character is generated from multiple two-dimensional images of the character and a pre-built three-dimensional model corresponding to it, the two-dimensional images are first spliced according to the IDs and directions of the human-body feature points; after splicing, the common edges between corresponding points are stitched with a virtual stitching technique following the triangle-mesh method, joining the spliced images together. The stitched image is then fitted to the three-dimensional model, that is, the figure is adjusted according to the parameters and features set by the model, yielding the three-dimensional figure of the virtual character.
It should be noted that this embodiment places no particular limit on the number of two-dimensional images used for splicing. In general, the more two-dimensional images are used, the more accurate the stitched result.
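The triangle-mesh stitching along common edges can be sketched as follows: each spliced view contributes triangles over its feature points, a common edge is a pair of feature-point IDs present in both views, and the merged mesh keeps one copy of each shared vertex. This is a toy illustration of the idea, not the patent's stitching algorithm:

```python
def stitch_meshes(tris_a, tris_b):
    """Merge two triangle meshes given as lists of feature-point-ID triples.

    Vertices are identified by feature-point ID, so triangles that share a
    common edge (two IDs) are sewn together automatically, and each shared
    vertex appears only once in the merged vertex set.
    """
    triangles = list(tris_a) + [t for t in tris_b if t not in tris_a]
    vertices = {v for tri in triangles for v in tri}
    edges = {frozenset(e) for tri in triangles
             for e in [(tri[0], tri[1]), (tri[1], tri[2]), (tri[0], tri[2])]}
    return vertices, edges, triangles

# A front view and a side view sharing the left-shoulder/left-elbow edge
# (the IDs follow the hypothetical WB_001-style scheme used for illustration).
front = [("SH_001", "EL_001", "WA_001")]
side = [("SH_001", "EL_001", "PB_001")]
verts, edges, tris = stitch_meshes(front, side)
# The shared edge {SH_001, EL_001} is stored once; 4 distinct vertices remain.
```

Identifying vertices by feature-point ID rather than by pixel coordinates is what lets the two views be sewn without duplicating the seam.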
More preferably, the three-dimensional model corresponds to the first character.
In this embodiment, for a specific virtual character (for example, a particular ACG figure), because the character is entirely created and designed by people, its three-dimensional parameters and features are all readily available. Moreover, human-body models are by now quite mature and widely used in fields such as animation and game design; it suffices to feed the three-dimensional parameters set by the character's creator into such a model as input to easily obtain the three-dimensional human model corresponding to the character.
It should also be noted that the first character addressed in this embodiment is a virtual character. For three-dimensional reconstruction of a real person, the positions of the feature points and the correspondences between them must be detected and/or estimated in advance; when those correspondences are known, the method of this embodiment can likewise be used to reconstruct the three-dimensional figure of a real person.
Embodiment two
With reference to Fig. 2, this embodiment provides a three-dimensional reconstruction device, comprising:
annotation unit 1, for annotating at least two two-dimensional images of a first character with preset human-body feature points;
splicing unit 2, for splicing the annotated two-dimensional images according to the human-body feature points;
stitching unit 3, for stitching the spliced image using the human-body feature points and the common edges between them;
reconstruction unit 4, for fitting the stitched image to a pre-built three-dimensional model of the first character, so as to obtain a three-dimensional figure of the first character.
The three-dimensional reconstruction device provided by this embodiment of the present invention annotates, splices, stitches, and reconstructs multiple two-dimensional images of a virtual character to obtain a three-dimensional figure of the character, thereby achieving a three-dimensional display of the virtual character.
Preferably, there are 21 human-body feature points.
In this embodiment, to generate the three-dimensional appearance of a virtual character, each two-dimensional image is annotated with the preset human-body feature points; when the two-dimensional images are later spliced, the human-body feature points on each image serve as the splicing index points (corresponding points), so that the images can be spliced accurately.
It should be noted that, to ensure fluent and attractive character motion and ease of design in the animation and game fields, the human-body feature points can be chosen according to physiological and medical research combined with the practical needs of character animation. These feature points are preset in advance (rather than identified in real time by motion detection, machine learning, or similar methods). Generally, alignment keypoints such as the joints, the optical centers of the eyes, the nose, the throat, the crown of the head, and the heels can be chosen as the preset human-body feature points. Preferably, in this embodiment there are 21 preset human-body feature points, located respectively at: the left heel, right heel, left knee, right knee, left waist, right waist, lower back, belly, left palm center, right palm center, left palm back, right palm back, left elbow, right elbow, left shoulder, right shoulder, left ear, right ear, nose, back of the head, and crown. It should also be noted that this embodiment places no particular limit on the number or exact locations of the feature points, which can be set according to actual needs.
In addition, when the two-dimensional images are annotated with the human-body feature points, the annotation content includes the ID of each human-body feature point and its direction in the two-dimensional image.
It should be noted that the ID of a human-body feature point is set to uniquely identify that feature point. In general, no special attention need be paid to the ordering or coding rules of the ID numbers. However, when the IDs must be transmitted over a network as information or are involved in coding work, an ID coding rule can be set to ensure effective information transfer and to save bandwidth and subsequent coding effort. For example, the left back waist could be given the ID WBL (waist back left), or the ID WB_001 (where WB abbreviates "waist back", 001 denotes left, 002 denotes right, and 000 denotes middle). This embodiment places no particular limit on the coding rule, which can be set according to actual needs.
More preferably, splicing unit 2 is specifically configured to splice the positions of the annotated two-dimensional images according to the IDs of the human-body feature points, and to splice their orientations according to the directions of the human-body feature points.
In this embodiment, the direction of a human-body feature point is defined in a right-handed coordinate system: the direction of each joint when the person stands upright is that joint's initial direction, and during motion each joint's forward direction in the right-handed coordinate system is its real-time direction. Setting a direction for each joint ensures that adjacent regions are not mis-stitched during two-dimensional splicing and virtual stitching (when stitching, not only must the human-body feature points be aligned, the directions of corresponding feature points must be aligned as well). It should also be noted that the choice of origin is relative (the position of the body is relative); in practice the highest point, lowest point, or center point is usually chosen as the origin, corresponding roughly to the crown, the heels, or the navel of the body. The position of the origin can be selected according to actual needs, and this embodiment places no particular limit on it.
More preferably, stitching unit 3 is specifically configured to stitch the spliced image with a virtual stitching technique in a triangle-mesh manner, using the human-body feature points and the common edges between them.
In this embodiment, when the three-dimensional figure of a virtual character is generated from multiple two-dimensional images of the character and a pre-built three-dimensional model corresponding to it, the two-dimensional images are first spliced according to the IDs and directions of the human-body feature points; after splicing, the common edges between corresponding points are stitched with a virtual stitching technique following the triangle-mesh method, joining the spliced images together. The stitched image is then fitted to the three-dimensional model, that is, the figure is adjusted according to the parameters and features set by the model, yielding the three-dimensional figure of the virtual character.
It should be noted that this embodiment places no particular limit on the number of two-dimensional images used for splicing. In general, the more two-dimensional images are used, the more accurate the stitched result.
More preferably, the three-dimensional model corresponds to the first character.
In this embodiment, for a specific virtual character (for example, a particular ACG figure), because the character is entirely created and designed by people, its three-dimensional parameters and features are all readily available. Moreover, human-body models are by now quite mature and widely used in fields such as animation and game design; it suffices to feed the three-dimensional parameters set by the character's creator into such a model as input to easily obtain the three-dimensional human model corresponding to the character.
Embodiment three
With reference to Fig. 3, this embodiment provides an augmented reality method for applying augmented reality to the three-dimensional figure built in embodiment one or embodiment two, comprising:
target acquisition step S10: obtaining a second character in a video frame;
motion-capture step S20: capturing the motion feature points of the second character;
trajectory-obtaining step S30: tracking the motion of the motion feature points through the video, so as to obtain the trajectory of each motion feature point;
trajectory-applying step S40: assigning the trajectory of each motion feature point as the trajectory of the corresponding keypoint in the three-dimensional figure, and estimating the trajectories between each pair of keypoints, so as to obtain the overall motion trajectory of the three-dimensional figure, thereby applying the motion state of the second character in the video to the three-dimensional figure.
The augmented reality method provided by this embodiment of the present invention uses motion-tracking techniques to obtain a character's actions from a video, and then makes a virtual character reproduce those actions. An existing character in a video can thus be replaced with a virtual figure within the original scene, providing users with a novel interactive experience at relatively low cost.
It should be noted that this embodiment can automatically identify the character in the video frames and then substitute the three-dimensional figure of the virtual character into the video with the same actions and position as the original character.
Specifically, this embodiment uses human-body tracking techniques to obtain the second character (that is, the original character) in the starting frame of the video; the motion feature points of the second character are then captured; moving-target tracking techniques then follow the motion of these feature points through the video to obtain the trajectory of each one; finally, the obtained trajectories are applied to the keypoints of the virtual character's three-dimensional figure to obtain the trajectory of each keypoint, and, preferably, the trajectory between each pair of keypoints is estimated with a Bezier curve.
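The trajectory-obtaining step can be sketched with a toy patch-matching tracker: for each pair of consecutive frames, the point's neighborhood in the previous frame is searched for in the next frame within a small radius, and the best match (smallest sum of squared differences) gives the point's new position. A real system would use an optical-flow or dedicated motion-tracking method; this is only an illustration of how a per-point trajectory is extracted, on a synthetic video:

```python
import numpy as np

def track_point(frames, start, radius=2, half=1):
    """Follow one feature point through 2D frames by SSD patch matching.

    Toy tracker: ignores image borders and tracks a single point.
    """
    traj = [start]
    y, x = start
    for prev, nxt in zip(frames, frames[1:]):
        patch = prev[y - half:y + half + 1, x - half:x + half + 1]
        best, best_cost = (y, x), float("inf")
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                ny, nx_ = y + dy, x + dx
                cand = nxt[ny - half:ny + half + 1, nx_ - half:nx_ + half + 1]
                cost = float(((cand - patch) ** 2).sum())
                if cost < best_cost:
                    best, best_cost = (ny, nx_), cost
        y, x = best
        traj.append(best)
    return traj

# Synthetic video: a bright 3x3 blob drifting one pixel down-right per frame.
frames = []
for t in range(4):
    f = np.zeros((20, 20))
    f[5 + t:8 + t, 5 + t:8 + t] = 1.0
    frames.append(f)
trajectory = track_point(frames, start=(6, 6))
# trajectory recovers the blob center frame by frame.
```

Running one such tracker per motion feature point yields exactly the per-point trajectories that the trajectory-applying step then assigns to the keypoints of the three-dimensional figure.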
In this embodiment, the motion feature points are common index points such as the human joints, the crown, and the heels. It should be noted that in this embodiment the positions of the motion feature points and the positions of the human-body feature points may be the same or different; this embodiment places no particular limit on them, and they can be selected according to actual needs.
It should be noted that the keypoints are the human-body feature points in the three-dimensional figure whose positions correspond to the motion feature points.
Furthermore, it should be noted that after the trajectory of each motion feature point has been assigned as the trajectory of the corresponding keypoint in the three-dimensional figure, only the motion of those keypoints is known; how the other parts of the figure move remains unknown. Estimating, with Bezier curves, what motion the parts between two moving keypoints should perform during the corresponding motion can therefore improve the precision of the augmented reality.
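The Bezier estimation between two keypoints can be illustrated with a cubic curve evaluated by de Casteljau's algorithm: the two keypoint positions serve as the endpoints, and two interior control points shape the estimated in-between path. The choice of a cubic curve and of the interior control points here is an illustrative assumption; the patent does not fix the curve's degree:

```python
def bezier(points, t):
    """Evaluate a Bezier curve at t in [0, 1] by de Casteljau's algorithm."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Path between two keypoints: the endpoints are the known keypoint positions,
# the interior control points bend the estimated in-between motion.
control = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
path = [bezier(control, i / 10) for i in range(11)]
# The curve starts and ends exactly at the two keypoints.
```

Because a Bezier curve always passes through its first and last control points, the estimated path is guaranteed to agree with the two known keypoint trajectories at its ends, which is what makes it a natural choice for filling in the unknown motion between them.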
It should also be noted that estimating the trajectory between each pair of keypoints with a Bezier curve is only a preferred technical solution of this embodiment; other approaches can also be used. For example, the trajectory between two keypoints could also be obtained with an adversarial generation method. This embodiment places no particular limit on it.
Example IV
With reference to Fig. 4, the present embodiment provides a kind of device of augmented reality, including,
a target acquisition unit 10, configured to obtain a second person in a video frame;
a motion capture unit 20, configured to capture the motion feature points of the second person;
a track acquisition unit 30, configured to track the motion of the motion feature points in the video, so as to obtain the movement locus of each motion feature point;
a track application unit 40, configured to determine the movement locus of each motion feature point as the movement locus of the corresponding key point in the three-dimensional image, and to estimate the movement locus between every two key points, so as to obtain the overall movement locus of the three-dimensional image, thereby applying the motion state of the second person in the video to the three-dimensional image.
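The four units above can be mirrored as a plain class sketch to show how they would be wired together. The class and method names, and the frame/person data shapes, are illustrative assumptions for this sketch, not the device's actual interfaces.

```python
# Hypothetical sketch of the four units of the augmented reality device.

class TargetAcquisitionUnit:            # unit 10
    def acquire(self, frame):
        """Return the second person (the original person) found in the frame."""
        return frame.get("person")

class MotionCaptureUnit:                # unit 20
    def capture(self, person):
        """Return the person's motion feature points (joints, crown, heels, ...)."""
        return person["feature_points"]

class TrackAcquisitionUnit:             # unit 30
    def track(self, frames, points):
        """Return one movement locus per motion feature point."""
        loci = {pid: [pos] for pid, pos in points.items()}
        for frame in frames:
            for pid in loci:
                loci[pid].append(frame["person"]["feature_points"][pid])
        return loci

class TrackApplicationUnit:             # unit 40
    def apply(self, loci, keypoint_map):
        """Determine the corresponding avatar key-point loci."""
        return {keypoint_map[pid]: locus for pid, locus in loci.items()}

# Usage: wire the units into the pipeline of the description.
frames = [{"person": {"feature_points": {"head": (0, 1)}}},
          {"person": {"feature_points": {"head": (0, 2)}}}]
person = TargetAcquisitionUnit().acquire(frames[0])
points = MotionCaptureUnit().capture(person)
loci = TrackAcquisitionUnit().track(frames[1:], points)
avatar_loci = TrackApplicationUnit().apply(loci, {"head": "avatar_crown"})
```

Each unit here is a pure function of its inputs, which matches the sequential step structure of the claimed method (acquire, capture, track, apply).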
The augmented reality device provided by the embodiment of the present invention obtains the actions of a person in a video using a motion tracking technique, and then has the virtual person reproduce those actions. It can thereby replace an existing figure image in the video with a virtual figure image within the original video scene, providing the user with a novel interactive experience at a relatively low cost.
It should be noted that the present embodiment can automatically identify the person in a video frame, and then replace the original person in the video with the three-dimensional image of the virtual person in the same action and position.
Specifically, the present embodiment first obtains the second person (that is, the original person) in the start frame of the video using a human body tracking technique; the motion feature points of the second person are then captured; a moving-target tracking technique is subsequently used to track the motion of the motion feature points in the video, so as to obtain the movement locus of each motion feature point; finally, the movement loci thus obtained are applied to the key points of the three-dimensional image of the virtual person, so as to obtain the movement locus of each key point, and, preferably, the movement locus between every two key points is estimated using a Bezier curve.
In the present embodiment, the motion feature points are generally marker points such as human joint points, the crown of the head, and the heels. It should be noted that, in the present embodiment, the positions of the motion feature points and the positions of the human body feature points may be the same or different; the present embodiment imposes no specific restriction, and the selection may be made according to actual needs.
It should be noted that a key point is a human body feature point in the three-dimensional image whose position corresponds to that of a motion feature point.
Furthermore, it should be noted that after the movement locus of each motion feature point is determined as the movement locus of the corresponding key point in the three-dimensional image, only the movement loci of the moving key points of the three-dimensional image are known, while how the other parts of the three-dimensional image move remains unknown. Therefore, using a Bezier curve to estimate what kind of motion the parts lying between two moving key points perform during the corresponding motion process can improve the precision of the augmented reality.
Furthermore, it should be noted that estimating the movement locus between every two key points using a Bezier curve is merely a preferred technical solution of the present embodiment, and other approaches may also be used. For example, the movement locus between every two key points may also be obtained using an adversarial generation method; the present embodiment imposes no specific restriction.
Although the present invention has been described to a certain degree, it is apparent that appropriate changes may be made to each condition without departing from the spirit and scope of the present invention. It will be understood that the invention is not limited to the described embodiments, but is defined by the scope of the claims, which includes equivalent substitutions of each element.

Claims (10)

  1. A three-dimensional reconstruction method, characterized by comprising:
    an annotation step of annotating at least two two-dimensional images of a first person using preset human body feature points;
    a splicing step of splicing the plurality of annotated two-dimensional images according to the human body feature points;
    a stitching step of stitching the spliced image using the common edges between the human body feature points;
    a reconstruction step of fitting the stitched image to a pre-built three-dimensional model, so as to obtain a three-dimensional image of the first person.
  2. The three-dimensional reconstruction method according to claim 1, characterized in that there are 21 of the human body feature points.
  3. The three-dimensional reconstruction method according to claim 2, characterized in that the splicing step specifically comprises performing position splicing on the plurality of annotated two-dimensional images according to the IDs of the human body feature points, and performing direction splicing on the plurality of annotated two-dimensional images according to the directions of the human body feature points.
  4. The three-dimensional reconstruction method according to claim 1, characterized in that the stitching step specifically comprises stitching the spliced image using the common edges between the human body feature points and using a virtual stitching technique following a triangular-mesh stitching manner.
  5. The three-dimensional reconstruction method according to claim 1, characterized in that the three-dimensional model corresponds to the first person.
  6. A three-dimensional reconstruction device, characterized by comprising:
    an annotation unit, configured to annotate at least two two-dimensional images of a first person using preset human body feature points;
    a splicing unit, configured to splice the plurality of annotated two-dimensional images according to the human body feature points;
    a stitching unit, configured to stitch the spliced image using the common edges between the human body feature points;
    a reconstruction unit, configured to fit the stitched image to a pre-built three-dimensional model of the first person, so as to obtain a three-dimensional image of the first person.
  7. An augmented reality method for realizing augmented reality on the three-dimensional image according to any one of claims 1 to 6, characterized by comprising:
    a target acquisition step of obtaining a second person in a video frame;
    a motion capture step of capturing the motion feature points of the second person;
    a track acquisition step of tracking the motion of the motion feature points in the video, so as to obtain the movement locus of each motion feature point;
    a track application step of determining the movement locus of each motion feature point as the movement locus of the corresponding key point in the three-dimensional image, and estimating the movement locus between every two of the key points, so as to obtain the overall movement locus of the three-dimensional image, thereby applying the motion state of the second person in the video to the three-dimensional image.
  8. The augmented reality method according to claim 7, characterized in that the movement locus between every two of the key points is estimated using a Bezier curve.
  9. The augmented reality method according to claim 8, characterized in that the key point is a human body feature point in the three-dimensional image whose position corresponds to that of the motion feature point.
  10. An augmented reality device, characterized by comprising:
    a target acquisition unit, configured to obtain a second person in a video frame;
    a motion capture unit, configured to capture the motion feature points of the second person;
    a track acquisition unit, configured to track the motion of the motion feature points in the video, so as to obtain the movement locus of each motion feature point;
    a track application unit, configured to determine the movement locus of each motion feature point as the movement locus of the corresponding key point in the three-dimensional image, and to estimate the movement locus between every two of the key points, so as to obtain the overall movement locus of the three-dimensional image, thereby applying the motion state of the second person in the video to the three-dimensional image.
CN201711084000.4A 2017-11-07 2017-11-07 Three-dimensional reconstruction method and device, the method and device of augmented reality Pending CN107845129A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711084000.4A CN107845129A (en) 2017-11-07 2017-11-07 Three-dimensional reconstruction method and device, the method and device of augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711084000.4A CN107845129A (en) 2017-11-07 2017-11-07 Three-dimensional reconstruction method and device, the method and device of augmented reality

Publications (1)

Publication Number Publication Date
CN107845129A true CN107845129A (en) 2018-03-27

Family

ID=61682497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711084000.4A Pending CN107845129A (en) 2017-11-07 2017-11-07 Three-dimensional reconstruction method and device, the method and device of augmented reality

Country Status (1)

Country Link
CN (1) CN107845129A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101520902A (en) * 2009-02-24 2009-09-02 上海大学 System and method for low cost motion capture and demonstration
CN101579238A (en) * 2009-06-15 2009-11-18 吴健康 Human motion capture three dimensional playback system and method thereof
US20110157306A1 (en) * 2009-12-29 2011-06-30 Industrial Technology Research Institute Animation Generation Systems And Methods
CN102929386A (en) * 2012-09-16 2013-02-13 吴东辉 Method and system of reproducing virtual reality dynamically
CN103049896A (en) * 2012-12-27 2013-04-17 浙江大学 Automatic registration algorithm for geometric data and texture data of three-dimensional model
CN103218844A (en) * 2013-04-03 2013-07-24 腾讯科技(深圳)有限公司 Collocation method, implementation method, client side, server and system of virtual image
CN104156995A (en) * 2014-07-16 2014-11-19 浙江大学 Production method for ribbon animation aiming at Dunhuang flying image
CN105427369A (en) * 2015-11-25 2016-03-23 努比亚技术有限公司 Mobile terminal and method for generating three-dimensional image of mobile terminal
CN106297442A (en) * 2016-10-27 2017-01-04 深圳市成真教育科技有限公司 A kind of body-sensing mutual education realization method and system
CN106502388A (en) * 2016-09-26 2017-03-15 惠州Tcl移动通信有限公司 A kind of interactive movement technique and head-wearing type intelligent equipment
CN107095393A (en) * 2017-03-22 2017-08-29 青岛小步科技有限公司 A kind of customization footwear preparation method and system based on image recognition and dimensional Modeling Technology


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144252A (en) * 2018-08-01 2019-01-04 百度在线网络技术(北京)有限公司 Object determines method, apparatus, equipment and storage medium
US11042730B2 (en) 2018-08-01 2021-06-22 Baidu Online Network Technology (Beijing) Co., Ltd. Method, apparatus and device for determining an object, and storage medium for the same
CN109191593A (en) * 2018-08-27 2019-01-11 百度在线网络技术(北京)有限公司 Motion control method, device and the equipment of virtual three-dimensional model
CN110245638A (en) * 2019-06-20 2019-09-17 北京百度网讯科技有限公司 Video generation method and device
CN111383313A (en) * 2020-03-31 2020-07-07 歌尔股份有限公司 Virtual model rendering method, device and equipment and readable storage medium
CN114466202A (en) * 2020-11-06 2022-05-10 中移物联网有限公司 Mixed reality live broadcast method and device, electronic equipment and readable storage medium
CN114466202B (en) * 2020-11-06 2023-12-12 中移物联网有限公司 Mixed reality live broadcast method, apparatus, electronic device and readable storage medium
WO2022241583A1 (en) * 2021-05-15 2022-11-24 电子科技大学 Family scenario motion capture method based on multi-target video

Similar Documents

Publication Publication Date Title
CN107845129A (en) Three-dimensional reconstruction method and device, the method and device of augmented reality
CN110245638A (en) Video generation method and device
CN107231531A (en) A kind of networks VR technology and real scene shooting combination production of film and TV system
CN109815776B (en) Action prompting method and device, storage medium and electronic device
CN107343225B (en) The method, apparatus and terminal device of business object are shown in video image
WO2011045768A2 (en) Animation of photo-images via fitting of combined models
CN105959814B (en) Video barrage display methods based on scene Recognition and its display device
CN110766776A (en) Method and device for generating expression animation
Ping et al. Computer facial animation: A review
CN112184886B (en) Image processing method, device, computer equipment and storage medium
CN112734946A (en) Vocal music performance teaching method and system
Jin et al. GAN-based pencil drawing learning system for art education on large-scale image datasets with learning analytics
Wang et al. A survey of museum applied research based on mobile augmented reality
CN110298345A (en) A kind of area-of-interest automatic marking method of medical images data sets
CN111739134B (en) Model processing method and device for virtual character and readable storage medium
Tang Application and design of drama popular science education using augmented reality
Chen et al. Research on augmented reality system for childhood education reading
Yeo The theory of process augmentability
Zeng et al. Highly fluent sign language synthesis based on variable motion frame interpolation
Li Construction and simulation on intelligent medical equipment system based on virtual reality technology and human-computer interaction model
Wang Using Artificial Intelligence to Improve Camera’s Recognition Function on Mobile Phone
Hou et al. Real-time markerless facial motion capture of personalized 3D real human research
Brownridge Real-time motion capture for analysis and presentation within virtual environments
Volpato Batista OtoVIS: A Photorealistic Virtual Reality Environment for Visualizing the Anatomical Structures of the Ear and Temporal Bone
DIOGO INTEGRATING 3D OBJECTS AND POSE ESTIMATION FOR MULTIMODAL VIDEO ANNOTATIONS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180327