CN101271581A - Establishing personalized three-dimensional mannequin - Google Patents
- Publication number
- CN101271581A, CNA2008100613876A, CN200810061387A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a method for establishing a personalized three-dimensional human body model, which comprises the following steps: 1) photographing and synthesis of the reference images; 2) segmentation of the contour lines; 3) construction of a two-dimensional mapping; 4) modification of the three-dimensional point positions; 5) synthesis of the three-dimensional points. The invention requires neither expensive software or hardware support nor long-term training of the user: only a frontal photograph and a lateral photograph of the person to be modelled are needed, and the appearance characteristics extracted from the photographs are embedded into a reference model built into the algorithm to produce a realistic, articulated, personalized human body model. The advantages of the method are that the hardware cost is low and no special or proprietary hardware is required; the software cost is low; the operation is simple and the method can be mastered without training; the parts of the constructed model join together smoothly, giving a good appearance; and the constructed model carries a built-in skeleton structure, which makes it convenient for producing animation.
Description
Technical field
The present invention relates to the field of computer three-dimensional human body modelling, and in particular to a method for establishing a personalized three-dimensional human body model.
Background art
Virtual human models are in wide demand. How to build large numbers of virtual human models simply and conveniently, with little economic and human investment, is therefore of great significance.
Representations of human body models include stick models, volume models, surface models and multi-layer models. Among them the surface model, and especially the polygonal mesh, is the mainstream representation and the de facto industry standard, because it is simple to compute and has strong expressive power. To build human body surface models, researchers have proposed a wide variety of algorithms, which can roughly be divided into five classes: creative, capture-based, interpolation-based (data-driven), fitting-based and parametric methods.
In the first class of human body modelling methods, the creative methods, the user creates a human body model from nothing by interactively manipulating low-level modelling primitives. Although this approach gives the user maximal control, it places very high demands both on the user's artistic talent and on his or her skill with the modelling software.
The second class comprises the capture-based methods that have become popular in recent years. Data acquisition for these methods requires special capture equipment, ranging from expensive 3D scanners to calibrated cameras. Their advantage is the relatively high accuracy of the resulting model, but the generated model contains no structural information, so completing an animation with it requires tedious rigging.
The third class is the data-driven methods. They require the support of a large number of model samples, and obtain new models by interpolation, or by segmentation followed by re-assembly. Obviously a good model library is vital to data-driven modelling, but building such a library takes a great deal of time and resources.
The fourth class is the model-based reconstruction methods. They extract appearance features from photographs of an individual and use them to modify a generic model, so that its appearance matches the person in the photographs while the original structural information is preserved. These methods place low demands on equipment, are simple to operate, produce models with a good visual appearance, and make animation convenient.
Creative methods require the user to have considerable artistic talent and skilled command of the software, and the modelling process is complicated and time-consuming. Capture-based methods require expensive hardware, and the generated models lack the skeletal information needed to support animation. Data-driven methods need a large number of high-quality model samples; such libraries are currently lacking, and building one would again require much time and many resources. Parametric methods rely on the user's design ability and on a large number of reference models. None of these methods can meet the requirement of modelling in large quantities.
Summary of the invention
The object of the present invention is to provide a method for establishing a personalized three-dimensional human body model.
The method for establishing a personalized three-dimensional human body model comprises the following steps:
1) Photographing and synthesis of the reference images
The person stands 2 to 4 metres from the camera with both arms raised, and two photographs, one frontal and one lateral, are taken; images of the reference model are synthesized with a virtual camera whose parameters are identical to those of the photographing camera, and the contour lines of the human body are extracted from the photographs and from the synthesized reference images;
2) Segmentation of the contour lines
The contour lines extracted from the frontal photograph, the lateral photograph, the frontal reference image and the lateral reference image are segmented into sub-contours according to head, left upper arm, left forearm, left hand, right upper arm, right forearm, right hand, upper torso, pelvis, left thigh, left lower leg, left foot, right thigh, right lower leg and right foot; the sub-contours corresponding to the reference images are called reference sub-contours, and those corresponding to the photographs are called target sub-contours;
3) Construction of the two-dimensional mapping
The reference sub-contours and the target sub-contours are cut densely with straight lines perpendicular to the main axis; the resulting cut lines are parameterized to define a cut-line space; the points inside the reference sub-contour and the target sub-contour of the same region are parameterized separately, and points with identical parameter values form the two-dimensional mapping; after this processing, the two-dimensional mapping from any point in a reference image to a point in the target image can be constructed in the cut-line space;
4) Modification of the three-dimensional point positions
The reference model is cut at the positions corresponding to the cuts of the reference images, yielding 15 sub-reference models; each three-dimensional point of the reference model is projected to a pixel of the two-dimensional space, called the reference pixel, which necessarily lies inside a reference sub-contour; using the two-dimensional mapping obtained in the previous step, the reference pixel is mapped to a pixel inside the target sub-contour, called the target pixel, and the target pixel is back-projected into three-dimensional space to generate the new position of the three-dimensional point;
5) Synthesis of the three-dimensional points
The reference model is projected onto the frontal and lateral views respectively and the three-dimensional point positions are modified; the frontal processing yields the height and width information of the body, the lateral processing yields the height and thickness information, and their combination gives the complete three-dimensional model. The combination rule is that the width information of the resulting model is taken from the frontal processing, the thickness information from the lateral processing, and the height information from the average of the frontal and lateral processing results.
The step of cutting the reference sub-contours and the target sub-contours densely with straight lines perpendicular to the main axis, parameterizing the resulting cut lines and defining a cut-line space is as follows: the contour line is divided into two directed borders, a left half and a right half; the line connecting the starting points of the borders is called the initial cut line; the line connecting the end points of the borders is called the terminal cut line; the straight line connecting the midpoint of the initial cut line and the midpoint of the terminal cut line is the main axis; the points of the left and right borders are in one-to-one correspondence, and the line connecting a pair of corresponding points is called a cut line; any point of the space is then determined by two parameters, namely the relative position of the point's cut line among all cut lines and the relative position of the point on its cut line.
The present invention requires neither expensive software or hardware support nor long-term training of the user. Given only a frontal and a lateral photograph of a person, it embeds the appearance characteristics extracted from the photographs into a reference model built into the algorithm and generates a realistic, articulated, personalized human body model. The hardware cost is low and no special or proprietary hardware is needed; the software cost is low, the operation is simple, and the method can be mastered without training; the parts of the constructed model join together smoothly; and the constructed model carries a built-in skeleton structure, which makes it convenient for producing animation.
Description of drawings
Fig. 1(a) is a schematic diagram of the parametrization of the present invention;
Fig. 1(b) is a schematic diagram of adjusting the orientation of the cut lines in the present invention.
Embodiment
The method for establishing a personalized three-dimensional human body model comprises the following steps:
1 Photographing and synthesis of the reference images
1.1 Taking the photographs
The appearance characteristics of the newly generated human body model are extracted from photographs of a real person. When taking the photographs, the following points should be observed: the person stands 2 to 4 metres from the camera with both arms raised; the frontal and lateral photographs must be of the same person; the person should not wear clothes that are too loose; and the person in the photograph must be large enough to provide sufficient detail.
1.2 Synthesizing the standard model images
The projection matrix, model transformation matrix, viewpoint transformation matrix and viewport matrix of the virtual camera are set with the same parameters as used for photographing; the off-screen drawing functions of OpenGL are then used to draw the reference model and obtain the reference images.
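The patent only states that the virtual camera is given the same parameters as the photographing camera and that OpenGL off-screen drawing is used; the C++ sketch below illustrates one way this camera matching and read-back could look. The focal-length-to-field-of-view conversion, the camera placement, and the routine drawReferenceModel() are illustrative assumptions, and a valid OpenGL context is assumed to exist.

```cpp
// Minimal sketch, not the patented implementation: assumes a current OpenGL
// context (e.g. a hidden window or pbuffer) and a user-supplied
// drawReferenceModel() routine.
#include <GL/gl.h>
#include <GL/glu.h>
#include <cmath>
#include <vector>

void drawReferenceModel();  // assumed to be provided by the modelling system

void renderReferenceImage(int width, int height,       // resolution of the photograph
                          double focalLengthPx,        // focal length in pixels (assumed known)
                          double camDistance,          // 2-4 m, as in the photographing setup
                          std::vector<unsigned char>& rgb)
{
    const double kPi = 3.14159265358979323846;

    // Projection: derive the vertical field of view from the focal length so
    // that the virtual camera matches the photographing camera.
    double fovY = 2.0 * std::atan2(height / 2.0, focalLengthPx) * 180.0 / kPi;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovY, double(width) / height, 0.1, 100.0);

    // Model-view: camera at the photographing distance, looking at the model
    // (an eye height of 1 m is an illustrative assumption).
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 1.0, camDistance,   // eye
              0.0, 1.0, 0.0,           // centre: reference model at the origin
              0.0, 1.0, 0.0);          // up vector

    // Viewport matches the photograph resolution.
    glViewport(0, 0, width, height);

    // Background colour chosen to differ clearly from the model, so the
    // contour can be extracted automatically afterwards.
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    drawReferenceModel();

    // Read the synthesized reference image back for contour extraction.
    rgb.assign(static_cast<size_t>(width) * height * 3, 0);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, rgb.data());
}
```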
1.3 Extracting the contour lines from the standard images
When synthesizing the images, the background colour and foreground colour can be chosen so that they differ clearly, which allows the boundary to be extracted with an automatic boundary extraction algorithm. Concretely, the image is first binarized and then processed in three successive steps, boundary extraction, de-burring and boundary connection, after which the contour with its points in connected order is obtained.
The outer contour of the human body is extracted with the morphological operators used for image boundary extraction: subtracting from the binary image the result of eroding it once yields the outer boundary of the object in the image.
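As a concrete illustration of the boundary operator just described (subtract the once-eroded image from the binary image), here is a short C++ sketch. The 3x3 structuring element and the 0/255 pixel convention are assumptions; the description only specifies the erode-and-subtract idea.

```cpp
#include <cstdint>
#include <vector>

// Boundary of a binary image: the original minus its one-step erosion.
// Foreground pixels are assumed to be 255, background 0, row-major storage.
std::vector<uint8_t> outerBoundary(const std::vector<uint8_t>& bin, int w, int h)
{
    std::vector<uint8_t> eroded(bin.size(), 0), boundary(bin.size(), 0);
    // Erosion with a 3x3 structuring element (assumed).
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            bool allForeground = true;
            for (int dy = -1; dy <= 1 && allForeground; ++dy)
                for (int dx = -1; dx <= 1 && allForeground; ++dx)
                    if (bin[(y + dy) * w + (x + dx)] == 0) allForeground = false;
            eroded[y * w + x] = allForeground ? 255 : 0;
        }
    // Subtraction: pixels that are foreground in the original but not in the
    // eroded image form the boundary.
    for (size_t i = 0; i < bin.size(); ++i)
        boundary[i] = (bin[i] == 255 && eroded[i] == 0) ? 255 : 0;
    return boundary;
}
```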
The binary image after boundary extraction contains some burrs, for two reasons: insufficient precision of the scan conversion when the standard image is generated, and errors introduced during binarization. These burrs cause difficulties for the subsequent boundary connection. We therefore apply a thinning operation with a structuring element that removes end-point elements, which eliminates the burrs.
Boundary connection takes as input the image composed of the previously obtained boundary elements, and outputs the sequence of coordinate values of the boundary elements arranged in connection order. The basic idea is to start from one boundary element, find the nearest unprocessed boundary element and append its coordinates to the end of the sequence, then repeat the process starting from the newly found element until the connected boundary is obtained.
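The boundary-connection step can be sketched as follows; the quadratic nearest-neighbour search is a direct, unoptimized rendering of the described idea.

```cpp
#include <limits>
#include <vector>

struct Pt { int x, y; };

// Connect unordered boundary pixels into an ordered contour: start anywhere,
// then repeatedly append the nearest still-unvisited boundary pixel.
std::vector<Pt> connectBoundary(const std::vector<Pt>& pixels)
{
    std::vector<Pt> contour;
    if (pixels.empty()) return contour;
    std::vector<bool> used(pixels.size(), false);
    size_t cur = 0;
    used[0] = true;
    contour.push_back(pixels[0]);
    for (size_t step = 1; step < pixels.size(); ++step) {
        double best = std::numeric_limits<double>::max();
        size_t bestIdx = cur;
        for (size_t i = 0; i < pixels.size(); ++i) {
            if (used[i]) continue;
            double dx = pixels[i].x - pixels[cur].x;
            double dy = pixels[i].y - pixels[cur].y;
            double d = dx * dx + dy * dy;        // squared distance is enough
            if (d < best) { best = d; bestIdx = i; }
        }
        used[bestIdx] = true;
        contour.push_back(pixels[bestIdx]);
        cur = bestIdx;
    }
    return contour;
}
```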
1.4 Extracting the contour lines from the photographs
The task of this stage is to extract the contour line of the human body from a photograph or a synthesized image. If the background of the photograph is fairly simple, for example a blue-screen setting, the algorithm of Section 1.3 can be used. If the background is more complicated, the contour line is extracted by hand with the assistance of the program.
For automatic extraction of the body contour line, two automatic schemes are available. Scheme one uses edge detection followed by edge linking. Edge detection is a classical problem in computer image processing; the methods include detecting the maxima of the gradient, detecting the zero crossings of the second derivative, statistical methods, and wavelet-based multi-scale methods. However, in a general photograph (apart from photographs with the simple foreground and background described above), the background and the interior of the body contain lines other than the body contour, and in terms of signal strength the body contour is not clearly dominant over the other lines. For this reason the algorithm cannot find the human contour automatically.
Scheme two uses the active contour model. The active contour method starts from an initial curve near the object edge to be found, moves it in the direction of decreasing target energy, and finally stops at the edge. The target energy consists of two parts: an internal energy determined by the intrinsic characteristics of the contour and an external energy determined by the image features. In the application context of our body modelling method, the final result of the active contour model depends strongly on the choice of the initial boundary, so fully automatic extraction of the body contour with the active contour method is also impractical.
In this situation a program-assisted, operator-guided method is more suitable. Through the automatic computation of the program, only a little manual guidance is needed to extract the human contour where the boundary in the image is fairly clear; where the boundary is blurred or even missing, the operator can still rely on human vision to delineate the required boundary by hand. The method displays the photograph in a window as the background; the operator then specifies the nodes of the contour line by clicking with the mouse inside the window. As the mouse moves, the program automatically generates the partial contour from the last node to the mouse pointer position, and this contour is attracted by the boundaries in the image. The operator evaluates the computed result, confirms it when satisfied, and the current mouse position becomes the new starting point for computing the next contour section. In general, the less interfering signal the image contains, the fewer nodes the contour line needs and the smaller the operator's workload. This method directly generates the sequence of point coordinates on the contour line, so no additional edge-linking operation is needed.
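The description characterizes the assisted tracing only qualitatively (the partial contour is "attracted" by image boundaries). The sketch below is one plausible, assumed realization: samples along the straight segment from the last node to the mouse pointer are pulled to the strongest-gradient pixel inside a small search window. The window size and the gradient criterion are illustrative assumptions, not the patented algorithm.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <vector>

struct P2 { int x, y; };

// Gradient magnitude of a grey image at (x, y), central differences.
static double gradMag(const std::vector<uint8_t>& g, int w, int h, int x, int y)
{
    if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) return 0.0;
    double gx = double(g[y * w + (x + 1)]) - double(g[y * w + (x - 1)]);
    double gy = double(g[(y + 1) * w + x]) - double(g[(y - 1) * w + x]);
    return std::sqrt(gx * gx + gy * gy);
}

// Partial contour from the last confirmed node to the mouse position: sample
// the straight segment and pull every sample towards the strongest nearby
// edge within a small window (the "attraction" to image boundaries).
std::vector<P2> traceSegment(const std::vector<uint8_t>& grey, int w, int h,
                             P2 lastNode, P2 mouse, int window = 4)
{
    std::vector<P2> out;
    int steps = std::max(std::abs(mouse.x - lastNode.x),
                         std::abs(mouse.y - lastNode.y));
    for (int s = 0; s <= steps; ++s) {
        double t = steps ? double(s) / steps : 0.0;
        int cx = int(lastNode.x + t * (mouse.x - lastNode.x) + 0.5);
        int cy = int(lastNode.y + t * (mouse.y - lastNode.y) + 0.5);
        P2 best{cx, cy};
        double bestMag = -1.0;
        for (int dy = -window; dy <= window; ++dy)
            for (int dx = -window; dx <= window; ++dx) {
                double m = gradMag(grey, w, h, cx + dx, cy + dy);
                if (m > bestMag) { bestMag = m; best = {cx + dx, cy + dy}; }
            }
        out.push_back(best);
    }
    return out;
}
```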
2 Segmentation of the contour lines
The contour lines extracted from the frontal photograph, the lateral photograph, the frontal reference image and the lateral reference image are segmented into sub-contours according to head, left upper arm, left forearm, left hand, right upper arm, right forearm, right hand, upper torso, pelvis, left thigh, left lower leg, left foot, right thigh, right lower leg and right foot; the sub-contours corresponding to the reference images are called reference sub-contours, and those corresponding to the photographs are called target sub-contours.
2.1 Segmenting the three-dimensional model
The three-dimensional model is segmented automatically by the program at the positions where the reference two-dimensional images were segmented. Concretely, each two-dimensional cut line is projected along the viewing direction and expanded into a cut surface.
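A minimal sketch of this idea, projecting the two-dimensional cut line along the viewing direction: a vertex is assigned to one side or the other of the resulting cut surface according to which side of the cut line its projection falls on. The use of gluProject and the sign convention are assumptions for illustration, not the patented implementation.

```cpp
#include <GL/glu.h>

// Returns +1 or -1 depending on which side of the 2D cut line (a -> b) the
// 3D vertex (X, Y, Z) projects to; this is equivalent to testing against the
// cut surface obtained by extruding the cut line along the viewing direction.
int sideOfCut(GLdouble X, GLdouble Y, GLdouble Z,
              const GLdouble model[16], const GLdouble proj[16], const GLint view[4],
              double ax, double ay, double bx, double by)
{
    GLdouble x, y, z;  // window coordinates of the projected vertex
    gluProject(X, Y, Z, model, proj, view, &x, &y, &z);
    double cross = (bx - ax) * (y - ay) - (by - ay) * (x - ax);
    return cross >= 0.0 ? +1 : -1;
}
```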
3 Construction of the two-dimensional mapping
The reference sub-contours and the target sub-contours are cut densely with straight lines perpendicular to the main axis; the resulting cut lines are parameterized to define a cut-line space; the points inside the reference sub-contour and the target sub-contour of the same region are parameterized separately, and points with identical parameter values form the two-dimensional mapping. After this processing, the two-dimensional mapping from any point in a reference image to a point in the target image can be constructed in the cut-line space.
The two-dimensional mapping algorithm based on the cut-line space representation maps any point inside a given standard contour line to a unique point inside the characteristic (target) contour line; this algorithm is the basis of the non-uniform deformation. Below we first explain how to construct this two-dimensional mapping in the basic case, and then give the concrete treatment of the practical situations.
3.1 Definition and parametrization of the cut-line space
Consider first the basic case shown in Fig. 1(a): the contour line is divided into two directed borders, a left half and a right half; the line connecting the starting points of the borders (the initial cut line) is parallel to the line connecting the end points of the borders (the terminal cut line), and both are perpendicular to the straight line (the main axis) connecting the midpoint of the two starting points and the midpoint of the two end points; the points of the left and right borders are in one-to-one correspondence. The line connecting a pair of corresponding points is called a cut line. Any point of the space can then be determined by two parameters: the relative position u of the cut line through the point within the cut-line cluster, and the relative position v of the point on its cut line.
The border is defined as the union B = L ∪ R of the left and right halves, where L and R are sets of points whose coordinates are written (s, t). Two auxiliary functions l(t) and r(t) are defined which, for a given integer t, return the corresponding s value on the left and right border respectively. If the Cartesian coordinates of a point inside the border are (s, t), its cut-line-space coordinates are (u, v), and the initial and terminal cut lines lie at t = t0 and t = t1 respectively, then
u = (t - t0) / (t1 - t0)   and   v = (s - l(t)) / (r(t) - l(t)).
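Under this parametrization, the forward and inverse mappings between Cartesian coordinates (s, t) and cut-line-space coordinates (u, v) can be sketched as follows. Representing the borders by callable functions l(t) and r(t) and passing t0 and t1 explicitly are assumptions made for illustration.

```cpp
#include <functional>

struct UV { double u, v; };

// Cut-line-space coordinates of an interior point (s, t): u is the relative
// position of the point's cut line between the initial (t0) and terminal (t1)
// cut lines, v is the relative position of the point on that cut line;
// l(t) and r(t) return the s-coordinate of the left and right border at t.
UV toCutLineSpace(double s, double t, double t0, double t1,
                  const std::function<double(double)>& l,
                  const std::function<double(double)>& r)
{
    UV p;
    p.u = (t - t0) / (t1 - t0);
    p.v = (s - l(t)) / (r(t) - l(t));
    return p;
}

// Inverse mapping: recover the Cartesian coordinates from (u, v).
void fromCutLineSpace(double u, double v, double t0, double t1,
                      const std::function<double(double)>& l,
                      const std::function<double(double)>& r,
                      double& s, double& t)
{
    t = t0 + u * (t1 - t0);
    s = l(t) + v * (r(t) - l(t));
}
```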
3.2 Treatment of an inclined main axis
In fact the direction of extension of the segments is not always vertical; the arms and legs, for example, are inclined. Rotating about the midpoint of the initial cut line by a suitable angle restores the main axis to the vertical. If the coordinates before and after the rotation are (x, y) and (x', y'), the rotation centre is (cx, cy) and the rotation angle is θ, then the forward transformation is
x' = cx + (x - cx)·cos θ - (y - cy)·sin θ,   y' = cy + (x - cx)·sin θ + (y - cy)·cos θ,
and the inverse transformation is the same rotation taken with angle -θ.
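This is the standard rotation about an arbitrary centre; a small helper is sketched below for completeness (the inverse transform is the same call with -theta).

```cpp
#include <cmath>

// Rotate the point (x, y) by theta radians about the centre (cx, cy);
// applying the same function with -theta gives the inverse transformation.
void rotateAbout(double cx, double cy, double theta,
                 double x, double y, double& xr, double& yr)
{
    const double c = std::cos(theta), s = std::sin(theta);
    xr = cx + (x - cx) * c - (y - cy) * s;
    yr = cy + (x - cx) * s + (y - cy) * c;
}
```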
3.3 Treatment of posture differences
The human body of the generic model is usually in a relaxed pose, in which the boundary points of adjacent segments coincide in space. After one segment is rotated by some angle about the joint, the traditional approach rotates only the points of the lower segment, but this can make the cut lines near the boundary intersect and damage the continuity of the model surface. Here, instead, the orientations of the cut lines inside the two segments are adjusted with a smoothly varying rotation angle: the closer a cut line lies to the boundary, the larger its rotation angle, and the angle decays to zero beyond a certain distance (Fig. 1(b)). If the parameters before and after the adjustment are (u, v) and (a, b) respectively, the mappings in the two directions are defined in terms of a weight function W(u) that is largest at the boundary and decays to zero with increasing distance from it.
4 Modification of the three-dimensional point positions
The reference model is cut at the positions corresponding to the cuts of the reference images, yielding 15 sub-reference models; each three-dimensional point of the reference model is projected to a pixel of the two-dimensional space, called the reference pixel, which necessarily lies inside a reference sub-contour; using the two-dimensional mapping obtained in the previous step, the reference pixel is mapped to a pixel inside the target sub-contour, called the target pixel, and the target pixel is back-projected into three-dimensional space to generate the new position of the three-dimensional point.
Two-dimensional points and three-dimensional points are related through the projection matrix, the model-view matrix and the viewport. Up to a scale factor s, the relation can be written
s · (x, y, 1)^T = diag(f, f, 1) · [R | t] · (X, Y, Z, 1)^T,
where R is the rotation matrix, t is the translation vector, f is the focal length of the camera, (x, y, 1) are the homogeneous coordinates of the two-dimensional point and (X, Y, Z, 1) are the homogeneous coordinates of the three-dimensional point.
The utility functions gluProject and gluUnProject provided by the GLU library implement the above conversions. For a point (X, Y, Z) of the generic three-dimensional model, projection yields the two-dimensional coordinates (x, y) and the depth value z; the two-dimensional mapping gives the point (x', y') corresponding to (x, y); and back-projecting (x', y', z) yields the new position. As in the method of Hilton [7], the depth value of the point in the generic model is used to approximate the depth value of the corresponding point in the new model. Since the distance between the person and the camera is about 3 m while the coordinates of corresponding points of the standard body and the specific body differ by only a few centimetres, this approximation is reasonable.
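The description names gluProject and gluUnProject explicitly; the sketch below strings them together with the two-dimensional mapping of step 3 into the per-vertex update, reusing the reference depth value as described. The map2D function-pointer interface is an assumption made for illustration.

```cpp
#include <GL/glu.h>

// Move one reference-model vertex to its new position: project it to 2D,
// map the 2D point from the reference sub-contour into the target sub-contour
// with the cut-line-space mapping, then back-project using the original depth.
void moveVertex(GLdouble& X, GLdouble& Y, GLdouble& Z,
                const GLdouble model[16], const GLdouble proj[16], const GLint view[4],
                // map2D: the 2D mapping built in step 3 (reference -> target pixel)
                void (*map2D)(double xIn, double yIn, double& xOut, double& yOut))
{
    GLdouble x, y, z;  // window coordinates and depth of the reference pixel
    gluProject(X, Y, Z, model, proj, view, &x, &y, &z);

    double xt, yt;     // target pixel inside the target sub-contour
    map2D(x, y, xt, yt);

    // Back-project with the reference depth value z kept unchanged,
    // as the approximation described above.
    gluUnProject(xt, yt, z, model, proj, view, &X, &Y, &Z);
}
```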
5 Synthesis of the three-dimensional points
The reference model is projected onto the frontal and lateral views respectively and the three-dimensional point positions are modified; the frontal processing yields the height and width information of the body, the lateral processing yields the height and thickness information, and their combination gives the complete three-dimensional model. The combination rule is that the width information of the resulting model is taken from the frontal processing, the thickness information from the lateral processing, and the height information from the average of the frontal and lateral processing results.
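A minimal sketch of the stated combination rule, assuming the conventional axis assignment x = width, y = height, z = depth (the axis assignment is an assumption; the rule itself is as described above).

```cpp
struct Vec3 { double x, y, z; };

// Combine the frontal and lateral deformation results for one vertex:
// width (x) from the frontal result, thickness (z) from the lateral result,
// height (y) as the average of the two.
Vec3 combine(const Vec3& fromFront, const Vec3& fromSide)
{
    Vec3 out;
    out.x = fromFront.x;                       // width from the frontal view
    out.z = fromSide.z;                        // thickness from the lateral view
    out.y = 0.5 * (fromFront.y + fromSide.y);  // height averaged over both views
    return out;
}
```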
Claims (2)
1. A method for establishing a personalized three-dimensional human body model, characterized in that it comprises the following steps:
1) Photographing and synthesis of the reference images
The person stands 2 to 4 metres from the camera with both arms raised, and two photographs, one frontal and one lateral, are taken; images of the reference model are synthesized with a virtual camera whose parameters are identical to those of the photographing camera, and the contour lines of the human body are extracted from the photographs and from the synthesized reference images;
2) Segmentation of the contour lines
The contour lines extracted from the frontal photograph, the lateral photograph, the frontal reference image and the lateral reference image are segmented into sub-contours according to head, left upper arm, left forearm, left hand, right upper arm, right forearm, right hand, upper torso, pelvis, left thigh, left lower leg, left foot, right thigh, right lower leg and right foot; the sub-contours corresponding to the reference images are called reference sub-contours, and those corresponding to the photographs are called target sub-contours;
3) Construction of the two-dimensional mapping
The reference sub-contours and the target sub-contours are cut densely with straight lines perpendicular to the main axis; the resulting cut lines are parameterized to define a cut-line space; the points inside the reference sub-contour and the target sub-contour of the same region are parameterized separately, and points with identical parameter values form the two-dimensional mapping; after this processing, the two-dimensional mapping from any point in a reference image to a point in the target image can be constructed in the cut-line space;
4) Modification of the three-dimensional point positions
The reference model is cut at the positions corresponding to the cuts of the reference images, yielding 15 sub-reference models; each three-dimensional point of the reference model is projected to a pixel of the two-dimensional space, called the reference pixel, which necessarily lies inside a reference sub-contour; using the two-dimensional mapping obtained in the previous step, the reference pixel is mapped to a pixel inside the target sub-contour, called the target pixel, and the target pixel is back-projected into three-dimensional space to generate the new position of the three-dimensional point;
5) Synthesis of the three-dimensional points
The reference model is projected onto the frontal and lateral views respectively and the three-dimensional point positions are modified; the frontal processing yields the height and width information of the body, the lateral processing yields the height and thickness information, and their combination gives the complete three-dimensional model; the combination rule is that the width information of the resulting model is taken from the frontal processing, the thickness information from the lateral processing, and the height information from the average of the frontal and lateral processing results.
2. The method for establishing a personalized three-dimensional human body model according to claim 1, characterized in that the step of cutting the reference sub-contours and the target sub-contours densely with straight lines perpendicular to the main axis, parameterizing the resulting cut lines and defining a cut-line space is as follows: the contour line is divided into two directed borders, a left half and a right half; the line connecting the starting points of the borders is called the initial cut line; the line connecting the end points of the borders is called the terminal cut line; the straight line connecting the midpoint of the initial cut line and the midpoint of the terminal cut line is the main axis; the points of the left and right borders are in one-to-one correspondence, and the line connecting a pair of corresponding points is called a cut line; any point of the space is then determined by two parameters, namely the relative position of the point's cut line among all cut lines and the relative position of the point on its cut line.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2008100613876A CN101271581A (en) | 2008-04-25 | 2008-04-25 | Establishing personalized three-dimensional mannequin |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2008100613876A CN101271581A (en) | 2008-04-25 | 2008-04-25 | Establishing personalized three-dimensional mannequin |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101271581A true CN101271581A (en) | 2008-09-24 |
Family
ID=40005532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA2008100613876A Pending CN101271581A (en) | 2008-04-25 | 2008-04-25 | Establishing personalized three-dimensional mannequin |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101271581A (en) |
2008-04-25: CN application CNA2008100613876A, patent CN101271581A (status: Pending)
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2530334C2 (en) * | 2009-01-30 | 2014-10-10 | Майкрософт Корпорейшн | Target visual tracking |
US9039528B2 (en) | 2009-01-30 | 2015-05-26 | Microsoft Technology Licensing, Llc | Visual target tracking |
US9842405B2 (en) | 2009-01-30 | 2017-12-12 | Microsoft Technology Licensing, Llc | Visual target tracking |
CN101814196A (en) * | 2010-03-09 | 2010-08-25 | 浙江大学 | Method for designing three-dimensional cartoon toys based on pictures |
CN103778649A (en) * | 2012-10-11 | 2014-05-07 | 通用汽车环球科技运作有限责任公司 | Imaging surface modeling for camera modeling and virtual view synthesis |
CN103778649B (en) * | 2012-10-11 | 2018-08-31 | 通用汽车环球科技运作有限责任公司 | Imaging surface modeling for camera modeling and virtual view synthesis |
CN104102343B (en) * | 2013-04-12 | 2019-03-01 | 杭州凌感科技有限公司 | Interactive input system and method |
CN104102343A (en) * | 2013-04-12 | 2014-10-15 | 何安莉 | Interactive Input System And Method |
US10203765B2 (en) | 2013-04-12 | 2019-02-12 | Usens, Inc. | Interactive input system and method |
CN104992444B (en) * | 2015-07-14 | 2018-09-21 | 山东易创电子有限公司 | A kind of cutting method and system of human body layer data |
CN104992444A (en) * | 2015-07-14 | 2015-10-21 | 山东易创电子有限公司 | Human tomographic image cutting method and system |
CN105336000A (en) * | 2015-12-09 | 2016-02-17 | 新疆华德软件科技有限公司 | Virtual human limb modeling method based on hyperboloids of revolution |
CN105631938A (en) * | 2015-12-29 | 2016-06-01 | 联想(北京)有限公司 | Image processing method and electronic equipment |
CN106355610A (en) * | 2016-08-31 | 2017-01-25 | 杭州远舟医疗科技有限公司 | Three-dimensional human body surface reconstruction method and device |
CN107343148A (en) * | 2017-07-31 | 2017-11-10 | 广东欧珀移动通信有限公司 | Image completion method, apparatus and terminal |
CN107343148B (en) * | 2017-07-31 | 2019-06-21 | Oppo广东移动通信有限公司 | Image completion method, apparatus and terminal |
CN108492299A (en) * | 2018-03-06 | 2018-09-04 | 天津天堰科技股份有限公司 | A kind of cutting method of 3-D view |
CN110189408A (en) * | 2019-06-04 | 2019-08-30 | 西安科技大学 | It is a kind of that the system and method for human body appearance data is obtained according to human body photo |
CN117350965A (en) * | 2023-10-07 | 2024-01-05 | 中国原子能科学研究院 | Index pre-estimating device for radioactive microsphere in object |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101271581A (en) | Establishing personalized three-dimensional mannequin | |
CN109035388B (en) | Three-dimensional face model reconstruction method and device | |
WO2022121645A1 (en) | Method for generating sense of reality of virtual object in teaching scene | |
CN103473806B (en) | A kind of clothes 3 D model construction method based on single image | |
CN110223379A (en) | Three-dimensional point cloud method for reconstructing based on laser radar | |
CN109003325B (en) | Three-dimensional reconstruction method, medium, device and computing equipment | |
CN102982578B (en) | Estimation method for dressed body 3D model in single character image | |
CN104376596B (en) | A kind of three-dimensional scene structure modeling and register method based on single image | |
JP4785880B2 (en) | System and method for 3D object recognition | |
Sinha et al. | Interactive 3D architectural modeling from unordered photo collections | |
CN103247075B (en) | Based on the indoor environment three-dimensional rebuilding method of variation mechanism | |
CN104330074B (en) | Intelligent surveying and mapping platform and realizing method thereof | |
CN101404091B (en) | Three-dimensional human face reconstruction method and system based on two-step shape modeling | |
US20050140670A1 (en) | Photogrammetric reconstruction of free-form objects with curvilinear structures | |
CN102509338B (en) | Contour and skeleton diagram-based video scene behavior generation method | |
CN103646416A (en) | Three-dimensional cartoon face texture generation method and device | |
WO2007146069A2 (en) | A sketch-based design system, apparatus, and method for the construction and modification of three-dimensional geometry | |
GB2389500A (en) | Generating 3D body models from scanned data | |
JPH05342310A (en) | Method and device for three-dimensional conversion of linear element data | |
CN104966318A (en) | A reality augmenting method having image superposition and image special effect functions | |
CN104407521A (en) | Method for realizing real-time simulation of underwater robot | |
JP7475022B2 (en) | Method and device for generating 3D maps of indoor spaces | |
CN106127743B (en) | The method and system of automatic Reconstruction bidimensional image and threedimensional model accurate relative location | |
CN103700134A (en) | Three-dimensional vector model real-time shadow deferred shading method based on controllable texture baking | |
Verhoeven | Computer graphics meets image fusion: The power of texture baking to simultaneously visualise 3D surface features and colour |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Open date: 20080924 |