CN108391113A - Data capture method based on 3D models and model generating means - Google Patents
- Publication number: CN108391113A (application CN201810158703.5A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Abstract
The invention discloses a data capture method based on 3D models, and a model generating device. The data capture method comprises: the model generating device obtains a first 3D model of a target user; the model generating device obtains a second 3D model of the corresponding part of the target user; a disparity value between the first 3D model and the second 3D model is obtained; and the disparity value is output. The data capture method based on 3D models and the model generating device of the invention can conveniently obtain a 3D model of the user's whole body, obtain various data from the 3D model, and compare the models captured before and after to determine the change in the user's figure.
Description
Technical field
The present invention relates to a data capture method based on 3D models, and to a model generating device.
Background art
A 3D video camera is a camera built with 3D lenses. It usually has two or more lenses whose spacing is close to the spacing of the human eyes, so that it can capture different images of the same scene similar to what the two eyes would see. A holographic 3D system has five or more lenses; through dot-grating or rhombic-grating holographic imaging, the same image can be viewed from all directions, as if the viewer were present at the scene.

From the first 3D video cameras onward, the 3D revolution has unfolded around Hollywood blockbusters and major sporting events. With the appearance of consumer 3D video cameras, the technology has come a step closer to ordinary users. With such a camera, every memorable moment of life can be captured in 3D, such as a child's first steps or a university graduation.

A 3D video camera usually has two or more lenses. Like the human brain, the camera itself can merge the two lens images into a single 3D image. These images can be played on a 3D television and viewed with so-called active shutter glasses, or viewed directly on a naked-eye 3D display. 3D shutter glasses switch the left and right lenses sixty times per second, so that each eye sees a slightly different picture of the same scene, and the brain fuses them into a single image perceived in 3D.

Existing 3D cameras are single-purpose and cannot provide users with richer usage experiences.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the prior-art defect that 3D image-capturing terminals are single-purpose and cannot provide users with richer usage experiences, by providing a data capture method based on 3D models and a model generating device which can obtain various data from a 3D model and compare the models captured before and after to determine the change in the user's figure.
The present invention solves the above technical problem through the following technical solutions:
A data capture method based on 3D models is characterized in that it is used with a model generating device, the data capture method comprising:
the model generating device obtains a first 3D model of a target user;
the model generating device obtains a second 3D model of the corresponding part of the target user;
a disparity value between the first 3D model and the second 3D model is obtained;
the disparity value is output.
Preferably, obtaining the disparity value between the first 3D model and the second 3D model comprises:
obtaining the volumes of the regions enclosed by the first 3D model and by the second 3D model respectively;
calculating the volume difference between the first 3D model and the second 3D model as the disparity value.
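As an illustration of this volume-based disparity value, the enclosed volume of a closed triangle mesh can be computed with the divergence theorem and the two volumes subtracted. This is only a sketch under assumed representations (a vertex array plus a face-index array per model); the function names are illustrative, not from the patent.

```python
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Volume of the region enclosed by a closed triangle mesh.

    Each triangle (a, b, c) contributes the signed volume of the
    tetrahedron it forms with the origin, dot(a, cross(b, c)) / 6;
    summing over all faces gives the enclosed volume (divergence theorem).
    """
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    return abs(np.einsum('ij,ij->i', a, np.cross(b, c)).sum()) / 6.0

def volume_disparity(model1, model2) -> float:
    """Disparity value: difference between the two enclosed volumes."""
    return mesh_volume(*model1) - mesh_volume(*model2)

# Sanity check with a unit cube (12 triangles, volume 1).
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
faces = np.array([[0, 2, 1], [0, 3, 2], [4, 5, 6], [4, 6, 7],
                  [0, 1, 5], [0, 5, 4], [1, 2, 6], [1, 6, 5],
                  [2, 3, 7], [2, 7, 6], [3, 0, 4], [3, 4, 7]])
```

Scaling the cube's height to 0.5 halves its volume, so the disparity value between the original and the scaled model is 0.5.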
Preferably, obtaining the disparity value between the first 3D model and the second 3D model comprises:
identifying at least three target feature points in each of the first 3D model and the second 3D model, the target feature points of the first 3D model corresponding one-to-one with those of the second 3D model;
overlapping the target feature points of the first 3D model with the target feature points of the second 3D model;
calculating the distances from pixels on the first 3D model to the corresponding pixels on the second 3D model as the disparity values.
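The overlap-then-measure variant can be sketched with a standard rigid alignment (the Kabsch algorithm) on the matched feature points, followed by per-point distances, treating the model's sample points (which the patent calls pixels) as 3D point sets. The patent does not specify how the overlap is computed, so the choice of Kabsch and all names below are assumptions for illustration.

```python
import numpy as np

def align_by_feature_points(src, dst):
    """Rigid transform (R, t) that best overlaps the source feature
    points onto the destination feature points (Kabsch algorithm).
    Needs at least three non-collinear corresponding points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # guard against reflection
    t = dst_c - R @ src_c
    return R, t

def point_disparities(pts_first, pts_second, R, t):
    """Disparity values: distances from points of the first model to the
    corresponding points of the second model after overlapping."""
    moved = pts_second @ R.T + t
    return np.linalg.norm(pts_first - moved, axis=1)

# Sanity check: recover a known rotation and translation.
rng = np.random.default_rng(1)
second = rng.normal(size=(8, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
first = second @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = align_by_feature_points(second, first)
```

With identical body shapes the residual distances are near zero; a real change in figure would show up as nonzero distances at the regions that changed.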
Preferably, the model generating device comprises a tray, a support, a rotating member, a supporting rod and at least three 3D video cameras;
the tray is mounted on the support through the rotating member and is rotated horizontally on the support about the axis of the rotating member;
the supporting rod is perpendicular to the plane of the tray;
the at least three 3D video cameras are arranged one above another along the supporting rod;
the shooting direction of each 3D video camera points from the camera toward the axis.
Preferably, the model generating device further comprises a processing end, and obtaining the first 3D model of the target user comprises:
the 3D video cameras photograph the target user on the tray while the tray rotates through one full turn, and send all of the 3D images to the processing end;
the processing end stitches all of the 3D images shot by a single 3D video camera during one rotation of the tray to generate a 3D submodel;
the processing end stitches the 3D submodels obtained from all of the 3D video cameras to generate the first 3D model.
Preferably, the data capture method comprises:
for the 3D images shot by each 3D video camera, the processing end identifies the feature points on every two adjacent 3D images and sutures the two adjacent images so that the same feature points coincide;
for every two adjacent 3D submodels, the processing end identifies the feature points on the two submodels and sutures them so that the same feature points coincide.
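Reduced to its essentials, the suturing step overlays two adjacent images so that their shared feature points coincide. The sketch below assumes the feature points have already been matched, and that a pure translation with a non-negative offset aligns the images; the patent does not restrict the transform, and all names are illustrative.

```python
import numpy as np

def stitch_pair(img_a, img_b, pts_a, pts_b):
    """Suture two adjacent images so their shared feature points coincide.

    img_a/img_b are 2D arrays; pts_a/pts_b are (N, 2) arrays giving the
    (row, col) positions of the same physical features in each image.
    Assumes image B sits down/right of image A (non-negative offset).
    """
    # Least-squares translation that moves B's feature points onto A's.
    dy, dx = np.round((pts_a - pts_b).mean(axis=0)).astype(int)
    h = max(img_a.shape[0], img_b.shape[0] + dy)
    w = max(img_a.shape[1], img_b.shape[1] + dx)
    canvas = np.zeros((h, w), dtype=img_a.dtype)
    canvas[:img_a.shape[0], :img_a.shape[1]] = img_a
    canvas[dy:dy + img_b.shape[0], dx:dx + img_b.shape[1]] = img_b
    return canvas

# Example: image B overlaps image A with a (2, 2) offset.
img_a = np.ones((3, 3), dtype=int)
img_b = np.full((3, 3), 2)
stitched = stitch_pair(img_a, img_b,
                       np.array([[2.0, 2.0]]),   # feature seen in A
                       np.array([[0.0, 0.0]]))   # same feature in B
```

A production stitcher would detect and match the feature points automatically and blend the overlap region rather than overwrite it; the sketch shows only the coincide-the-feature-points idea.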
Preferably, the processing end obtains the distance between target points on the 3D model through the correspondence between the number of pixels and physical length.
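The pixel-to-length correspondence amounts to dividing a pixel distance by a calibrated pixels-per-unit scale. The function name and the calibration figure below are assumptions, not from the patent.

```python
import math

def pixel_distance_to_length(p1, p2, pixels_per_unit):
    """Distance between two target points on the model, converted from a
    pixel count to a physical length via a known pixels-per-unit scale
    (e.g. calibrated from an object of known size in the scene)."""
    return math.dist(p1, p2) / pixels_per_unit

# Example: points 50 px apart at 10 px per centimetre are 5 cm apart.
length_cm = pixel_distance_to_length((0.0, 0.0), (0.0, 50.0), 10.0)
```

With such a calibration the processing end can report, for instance, a waist width or arm length directly from the model.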
A model generating device is characterized in that the model generating device is used to implement the data capture method described above.
On the basis of common knowledge in the art, the above preferable conditions can be combined arbitrarily to obtain the preferred embodiments of the present invention.
The positive effect of the present invention is that the data capture method based on 3D models and the model generating device of the present invention can conveniently obtain a 3D model of the user's whole body, obtain various data from the 3D model, and compare the models captured before and after to determine the change in the user's figure.
Description of the drawings
Fig. 1 is a structural schematic diagram of the model generating device of embodiment 1 of the present invention.
Fig. 2 is a flow chart of the data capture method of embodiment 1 of the present invention.
Fig. 3 is a flow chart of the data capture method of embodiment 2 of the present invention.
Detailed description of embodiments
The present invention is further illustrated below by way of embodiments, but is not thereby limited to the scope of these embodiments.
Embodiment 1
Referring to Fig. 1, this embodiment provides a model generating device comprising a tray 11, a support 12, a rotating member 13, a supporting rod, seven 3D video cameras 14 and a processing end.
In this embodiment the processing end is a computer. The processing end can also be a cloud server, in which case the data is transmitted to the cloud server and the computation is performed there.
The tray is mounted on the support through the rotating member and is rotated horizontally on the support about the axis of the rotating member.
The supporting rod is perpendicular to the plane of the tray.
The seven 3D video cameras are arranged one above another along the supporting rod.
The shooting direction of each 3D video camera points from the camera toward the axis 15.
The 3D video cameras photograph the user on the tray while the tray rotates through one full turn, and send all of the 3D images to the processing end.
The processing end stitches all of the 3D images shot by a single 3D video camera during one rotation of the tray to generate a 3D submodel.
The processing end further stitches the 3D submodels obtained from all of the 3D video cameras to generate a 3D model.
The specific stitching method is as follows: for the 3D images shot by each 3D video camera, the processing end identifies the feature points on every two adjacent 3D images and sutures the two adjacent images so that the same feature points coincide; for every two adjacent 3D submodels, the processing end identifies the feature points on the two submodels and sutures them so that the same feature points coincide.
For a target user:
the model generating device obtains a first 3D model of the target user;
the model generating device obtains a second 3D model of the corresponding part of the target user;
the processing end obtains the disparity value between the first 3D model and the second 3D model and outputs the disparity value.
The processing end also obtains the distance between target points on the 3D models through the correspondence between the number of pixels and physical length, the 3D models including the first 3D model and the second 3D model.
Specifically, the processing end obtains the volumes of the regions enclosed by the first 3D model and by the second 3D model respectively, and calculates the volume difference between the first 3D model and the second 3D model as the disparity value.
Referring to Fig. 2, using the above model generating device, this embodiment also provides a data capture method based on 3D models, comprising:
Step 100: the 3D video cameras photograph the target user on the tray while the tray rotates through one full turn, and send all of the 3D images to the processing end.
Step 101: the processing end stitches all of the 3D images shot by a single 3D video camera during one rotation of the tray to generate a 3D submodel.
For the 3D images shot by each 3D video camera, the processing end identifies the feature points on every two adjacent 3D images and sutures the two adjacent images so that the same feature points coincide; in this embodiment the 3D images are sutured in sequence in this manner to generate the 3D submodel.
Step 102: the processing end stitches the 3D submodels obtained from all of the 3D video cameras to generate the first 3D model of the target user.
For every two adjacent 3D submodels, the processing end identifies the feature points on the two submodels and sutures them so that the same feature points coincide; in this embodiment the 3D submodels are sutured in sequence in this manner to generate the 3D model.
Step 103: the model generating device obtains a second 3D model of the corresponding part of the target user.
In this embodiment, the corresponding part is the whole body. The model generating device obtains the second 3D model by the method of steps 100 to 102.
Step 104: obtain the volumes of the regions enclosed by the first 3D model and by the second 3D model respectively.
The enclosed region is the solid space surrounded by the 3D model.
Step 105: calculate the volume difference between the first 3D model and the second 3D model, then output the volume difference.
The volume difference is the disparity value, which indicates the change in the user's figure.
The data capture method based on 3D models and the model generating device of this embodiment can conveniently obtain a 3D model of the user's whole body, obtain various data from the 3D model, and compare the models captured before and after to determine the change in the user's figure.
Embodiment 2
The model generating device of this embodiment is substantially the same as that of embodiment 1, the only differences being:
the processing end identifies at least three target feature points in each of the first 3D model and the second 3D model, the target feature points of the first 3D model corresponding one-to-one with those of the second 3D model;
the processing end overlaps the target feature points of the first 3D model with the target feature points of the second 3D model;
the processing end further calculates the distances from pixels on the first 3D model to the corresponding pixels on the second 3D model as the disparity values.
Referring to Fig. 3, step 104 is accordingly replaced with:
Step 1041: identify at least three target feature points in each of the first 3D model and the second 3D model, the target feature points of the first 3D model corresponding one-to-one with those of the second 3D model.
Step 1042: overlap the target feature points of the first 3D model with the target feature points of the second 3D model.
Step 1043: calculate the distances from pixels on the first 3D model to the corresponding pixels on the second 3D model as the disparity values.
The data capture method based on 3D models and the model generating device of this embodiment can not only calculate disparity values but also conveniently locate where the differences lie, making it easier to find the changes in the user's figure.
Although specific embodiments of the present invention have been described above, those skilled in the art will appreciate that these are merely illustrative and that the protection scope of the present invention is defined by the appended claims. Those skilled in the art may make various changes and modifications without departing from the principle and substance of the present invention, and all such changes and modifications fall within the protection scope of the present invention.
Claims (8)
1. A data capture method based on 3D models, characterized in that it is used with a model generating device, the data capture method comprising:
the model generating device obtains a first 3D model of a target user;
the model generating device obtains a second 3D model of the corresponding part of the target user;
a disparity value between the first 3D model and the second 3D model is obtained;
the disparity value is output.
2. The data capture method of claim 1, characterized in that obtaining the disparity value between the first 3D model and the second 3D model comprises:
obtaining the volumes of the regions enclosed by the first 3D model and by the second 3D model respectively;
calculating the volume difference between the first 3D model and the second 3D model as the disparity value.
3. The data capture method of claim 1, characterized in that obtaining the disparity value between the first 3D model and the second 3D model comprises:
identifying at least three target feature points in each of the first 3D model and the second 3D model, the target feature points of the first 3D model corresponding one-to-one with those of the second 3D model;
overlapping the target feature points of the first 3D model with the target feature points of the second 3D model;
calculating the distances from pixels on the first 3D model to the corresponding pixels on the second 3D model as the disparity values.
4. The data capture method of claim 1, characterized in that the model generating device comprises a tray, a support, a rotating member, a supporting rod and at least three 3D video cameras;
the tray is mounted on the support through the rotating member and is rotated horizontally on the support about the axis of the rotating member;
the supporting rod is perpendicular to the plane of the tray;
the at least three 3D video cameras are arranged one above another along the supporting rod;
the shooting direction of each 3D video camera points from the camera toward the axis.
5. The data capture method of claim 4, characterized in that the model generating device further comprises a processing end, and obtaining the first 3D model of the target user comprises:
the 3D video cameras photograph the target user on the tray while the tray rotates through one full turn, and send all of the 3D images to the processing end;
the processing end stitches all of the 3D images shot by a single 3D video camera during one rotation of the tray to generate a 3D submodel;
the processing end stitches the 3D submodels obtained from all of the 3D video cameras to generate the first 3D model.
6. The data capture method of claim 5, characterized in that the data capture method comprises:
for the 3D images shot by each 3D video camera, the processing end identifies the feature points on every two adjacent 3D images and sutures the two adjacent images so that the same feature points coincide;
for every two adjacent 3D submodels, the processing end identifies the feature points on the two submodels and sutures them so that the same feature points coincide.
7. The data capture method of claim 5, characterized in that the processing end obtains the distance between target points on the 3D model through the correspondence between the number of pixels and physical length.
8. A model generating device, characterized in that the model generating device is used to implement the data capture method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810158703.5A | 2018-02-26 | 2018-02-26 | Data capture method based on 3D models and model generating means |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108391113A (en) | 2018-08-10 |
Family
ID=63068505
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810158703.5A | Data capture method based on 3D models and model generating means (Pending) | 2018-02-26 | 2018-02-26 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108391113A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN204065655U (en) * | 2014-08-27 | 2014-12-31 | 重庆海中蓝科技有限公司 | Article three-dimensional imaging capture apparatus |
CN204652507U (en) * | 2015-05-27 | 2015-09-16 | 西安算筹信息科技有限公司 | A kind of 3 D stereo scanning system |
CN106552412A (en) * | 2015-09-30 | 2017-04-05 | 捷荣科技集团有限公司 | A kind of fitness-assisting method and system based on anthropometric dummy self-recision |
CN106558009A (en) * | 2015-09-30 | 2017-04-05 | 捷荣科技集团有限公司 | A kind of body building method for pushing and system based on human body 3D models |
CN107644679A (en) * | 2017-08-09 | 2018-01-30 | 广东欧珀移动通信有限公司 | Information-pushing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180810 |