CN1549208A - Three-dimensional model sectional grain faced pattern treating method - Google Patents
Three-dimensional model sectional grain faced pattern treating method
- Publication number
- CN1549208A CNA031238726A CN03123872A
- Authority
- CN
- China
- Prior art keywords
- image
- grid
- texture
- dimensional model
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Generation (AREA)
- Image Analysis (AREA)
Abstract
The method includes the following steps: providing an image for the 3D model; converting the image and the texture map into the same spatial coordinate system and dividing them into several meshes; comparing the image and the texture map in that coordinate system to extract the repeated (overlapping) meshes; performing a weighted-average calculation on the pixel intensities of the repeated meshes to adjust the pixel intensities of the image and the texture map; selecting, according to a preset condition, the texture of either the image or the texture map as the texture of each mesh; making the pixels of the meshes continuous; and restoring the meshes to output the 3D model. The method avoids discontinuous changes in pixel intensity and preserves the quality of computer-generated imagery.
Description
Technical field
The present invention relates to a texture-mapping processing method for three-dimensional models, applied when conforming images onto a three-dimensional model. In particular, it is a hierarchical texture-mapping processing method that divides the work into an image level, a texture level and a pixel level, and adjusts the image pixel intensities on the textures at each level.
Background technology
Texture mapping technology was proposed in the field of computer graphics to increase the fidelity of computer-generated imagery (CGI). By setting realistic images as texture maps on a three-dimensional model, a better rendering result can be obtained with fewer mesh faces. With the progress of IC design technology, the computation and texture memory required for texture mapping are now built into most graphics chips. Therefore, in applications such as gaming, animation and 3D web sites, photo-realistic images produced by texture mapping can now be found for users to view and interact with.
In general, to attach an image onto a three-dimensional model (3D model), a complete and accurate digitized model of the object must first be produced by manual design or by 3D scanning. Next, two-dimensional images of the object are captured and the projection relation between the three-dimensional model and each two-dimensional image is established. Once the three-dimensional model of the object, an image taken from a certain angle and the corresponding projection relation are available, the digitized three-dimensional model can be projected onto the two-dimensional image through the projection relation. The image region covered by each projected mesh face of the three-dimensional model is then set as the texture corresponding to that face, which completes the texture-mapping setup.
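As an illustration of this projection step, the following minimal Python sketch (not part of the patent) projects the three vertices of one mesh face onto the image plane with a 3×4 projection matrix; the matrix values and point coordinates are placeholders.

```python
import numpy as np

def project_face(vertices_3d, P):
    """Project one triangular mesh face onto the image plane.

    vertices_3d: (3, 3) array of the face's 3D vertex coordinates.
    P: (3, 4) projection matrix relating model space to image space.
    Returns the (3, 2) array of 2D image coordinates of the vertices.
    """
    pts = np.hstack([vertices_3d, np.ones((3, 1))])   # homogeneous coordinates
    proj = (P @ pts.T).T                               # homogeneous image points
    return proj[:, :2] / proj[:, 2:3]                  # perspective divide

# The triangle enclosed by these 2D points is the image region assigned
# as the texture of this mesh face.
face = np.array([[0.1, 0.2, 1.0], [0.3, 0.2, 1.1], [0.2, 0.4, 0.9]])
P = np.array([[800.0, 0.0, 320.0, 0.0],
              [0.0, 800.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
print(project_face(face, P))
```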
When the texture map of a single image is applied to the three-dimensional model, the above procedure can quickly complete the texture setup and yield a textured model. However, one image cannot cover the textures of all surfaces of the object, and the rendering result of the model will then show holes or incorrect shading. Therefore, images taken from many different angles must be used to completely set the texture of every mesh face on the three-dimensional model. When images of the object are taken from different angles, however, factors such as shooting time, lighting and camera parameter settings (zoom, focal length, camera position) may cause the same surface point on the object to appear with different colors in the projections of different images. As a result, when several images are attached to the digitized three-dimensional model, inconsistent texture color variations occur and the rendering result shows visual defects.
To solve this problem, the already configured texture map of the three-dimensional model can be projected, for example onto a cylinder or a sphere surface, to produce a global map, which is then retouched manually with existing image-processing tools such as PhotoImpact or PhotoShop, adjusting the regions with inconsistent pixel color changes one by one. This work, however, requires an art designer skilled in image-processing tools to spend considerable time before an acceptable result is obtained. United States Patent No. 6057850 uses a controllable, positionable lighting system: at each camera position it captures images of the object under different lighting directions, and these images of fixed camera parameters but different lighting conditions are then stitched together by pixel-wise blending weighted according to lighting direction and camera angle, so that the shadows caused by the different lighting directions are removed by weighted averaging and an image free of lighting influence is obtained. Textures are then cut from this image and set on the three-dimensional model, so that renderings under different lighting and different camera parameters can be produced. However, this approach needs a specially positioned lighting system to achieve good results, which is uneconomical in practice.
In addition, United States Patent No. 6281904 cuts the three-dimensional model into several planar regions, manually chooses the projection range of each model region in each image, and then blends the projections of the same region from different images in various ways. This method considers how to combine the imaging of the same region across different images, but does not handle the differences between regions. As an improvement, United States Patent No. 6469710 proposes alpha blending: for the projection range of the same model surface region in different images, each pixel is assigned a contribution percentage toward the final texture, and if the projection range contains the imaging of another object, the contribution of that pixel is set to zero to eliminate the influence of erroneous pixels. However, its computation still only addresses the projection of the same model surface region, so texture variations between regions still cause visible differences because adjacent textures vary differently.
Summary of the invention
To address the above problems, the present invention provides a hierarchical texture-mapping processing method for three-dimensional models, so that the textures of adjacent mesh faces do not show discontinuous changes in pixel intensity caused by their source images coming from different lighting conditions and camera parameter settings, which would otherwise destroy the photo-realistic quality of the computer-generated imagery.
According to the hierarchical texture-mapping processing method of a three-dimensional model disclosed by the present invention, the texture-mapping work is divided into three levels: the image level, the texture level and the pixel level, and the image pixel intensities are adjusted to a different extent within the scope of each level. At the image level, for the mesh faces where the image overlaps the existing texture map, the average pixel intensity is computed by weighted averaging and the whole image is adjusted. The texture-level adjustment is then carried out: for the textures of the repeated mesh faces, a predetermined condition decides which texture to use or how to combine them by a normalization operation, and the texture of each mesh face is blurred with the textures of its surrounding faces so that the textures become smoother. Finally, at the pixel level, the colors of the pixels inside each mesh face are adjusted, yielding the best three-dimensional model. At the same time, the present invention processes input image files cumulatively, so that when a new image arrives no time is wasted recomputing earlier results.
Description of drawings
Fig. 1 is a schematic flowchart of the steps of the present invention;
Fig. 2 is the processing flowchart of the image level of the present invention;
Fig. 3 is the processing flowchart of the texture level of the present invention;
Fig. 4 is a schematic diagram of adjacent mesh faces; and
Fig. 5 is the processing flowchart of the pixel level of the present invention.
Description of reference numerals
μ_S0 ~ μ_S10: arithmetic mean intensities
Embodiment
The present invention discloses a hierarchical texture-mapping processing method for three-dimensional models. Referring to Fig. 1, first, several images are provided for the three-dimensional model (step 101); the images can be obtained by photographing the object from various angles, and no special lighting system as in the prior art is required. It is then determined whether the three-dimensional model already carries a texture map (step 102). If the input image is the first image to be conformed onto the three-dimensional model, the answer is naturally "No", and the flow enters texture extraction and mapping (step 103). If it is not the first image, the answer is "Yes", and the three-level adjustment is applied: image-level adjustment (step 104), texture-level adjustment (step 105) and pixel-level adjustment (step 106). It is then determined whether there is a next image (step 107), the images are processed in turn, and finally the three-dimensional model is output (step 108). With this design, whenever a new image arrives it only needs to be added and the computation accumulates immediately, with no need to recompute earlier results.
When the image is judged to be the first image, texture extraction and mapping are carried out (step 103); the method is detailed below. After the image and the three-dimensional model are loaded into memory, the projection relation matrix of the image is set first; it can be obtained by a manual or automatic camera calibration procedure. Next, the mesh faces of the digitized three-dimensional model are projected one by one onto the real image through the projection matrix. If a projected mesh face is visible in the image, the region it covers is set as its corresponding texture; otherwise the face is skipped and the next mesh face is processed, until all mesh faces have been handled for this image. At this point, because only a single image file is available, the resulting three-dimensional model may still have considerable blank gaps.
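A minimal sketch of this extraction loop is given below; the patent does not specify how visibility is determined, so the example approximates it with a simple backface test against the camera position, and all function and parameter names are illustrative.

```python
import numpy as np

def face_visible(vertices_3d, camera_center):
    """Rough visibility test: the face is treated as visible when its normal
    points toward the camera (a backface test; occlusion is not handled here)."""
    v0, v1, v2 = vertices_3d
    normal = np.cross(v1 - v0, v2 - v0)
    return np.dot(normal, camera_center - v0) > 0

def extract_textures(faces, P, camera_center):
    """For every mesh face, project it onto the image; if visible, record the
    projected 2D triangle as that face's texture region (step 103)."""
    textures = {}
    for idx, verts in enumerate(faces):
        if not face_visible(verts, camera_center):
            continue                          # invisible face: skip, leave blank
        pts = np.hstack([verts, np.ones((3, 1))])
        proj = (P @ pts.T).T
        uv = proj[:, :2] / proj[:, 2:3]       # 2D image coordinates of the face
        textures[idx] = uv                    # image region covered by this face
    return textures
```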
When the second and later images are input, the three-level adjustment is used, namely image-level adjustment (step 104), texture-level adjustment (step 105) and pixel-level adjustment (step 106); the adjustment at each of the three levels is detailed below. First, referring to Fig. 2, before two images are combined, the texture map of the three-dimensional model and the new image must be divided into the projections of several mesh faces (step 201); the same spatial coordinate system must of course be used for this division and conversion. The repeated mesh faces are then extracted (step 202), the repeated faces are used to compute the weighted-average pixel brightness (step 203), and the pixel intensity of the whole image is adjusted (step 204) with the following formula:
I_s'(x_i, y_i) = I_s(x_i, y_i) - μ_s + μ_b
where μ_s is the average pixel intensity of the repeated mesh faces of the three-dimensional model;
μ_b is the average pixel intensity of the repeated mesh faces of the input image;
I_s(x_i, y_i) is the pixel intensity at each point of the three-dimensional model; and
I_s'(x_i, y_i) is the adjusted pixel intensity at each point of the three-dimensional model.
That is to say, the average pixel intensities of the repeated mesh faces are used to adjust the whole image (including the texture map of the three-dimensional model and the input image), so that the pixel intensities of the texture map and of the newly input image are given a preliminary alignment. The above description covers only the second input image; for the third and later images, the texture map on the three-dimensional model should carry a higher proportion, which is handled by adding a weighting factor to the above parameters.
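For concreteness, a small sketch of this image-level adjustment follows, assuming intensities are stored as floating-point arrays and the repeated mesh faces are marked by boolean masks (names illustrative); it applies the formula I_s' = I_s - μ_s + μ_b given above.

```python
import numpy as np

def image_level_adjust(model_pixels, image_pixels, model_overlap_mask, image_overlap_mask):
    """Image-level adjustment (steps 202-204): shift the texture-map intensities so that
    the mean over the repeated meshes matches the mean of the input image's repeated meshes.

    model_pixels / image_pixels: 2D float arrays of pixel intensities.
    *_overlap_mask: boolean masks marking pixels that belong to repeated meshes.
    """
    mu_s = model_pixels[model_overlap_mask].mean()   # μ_s: model-side overlap mean
    mu_b = image_pixels[image_overlap_mask].mean()   # μ_b: image-side overlap mean
    return model_pixels - mu_s + mu_b                # I_s' = I_s - μ_s + μ_b
```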
After the pixel-intensity adjustment of the image, the textures of the repeated mesh faces are adjusted (step 205). The repeated faces are judged by a preset condition, such as resolution, mesh orientation or shooting angle, and adjusted accordingly. For instance, if resolution is used as the judging condition, the texture of the face with the larger resolution can be used directly as the final texture of that face; the other judging conditions follow the same principle.
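A small sketch of the resolution-based choice follows; it assumes that "resolution" can be measured by the projected triangle area in pixels, which is one possible reading of the condition and not fixed by the patent.

```python
def triangle_area(uv):
    """Area in pixels of a 2D triangle given as three (x, y) image coordinates."""
    (x0, y0), (x1, y1), (x2, y2) = uv
    return 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

def pick_texture_by_resolution(existing_uv, new_uv):
    """Step 205 with resolution as the preset condition: keep whichever texture
    covers the larger projected image area (used here as a proxy for resolution)."""
    return existing_uv if triangle_area(existing_uv) >= triangle_area(new_uv) else new_uv
```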
After the image-level adjustment (step 104) has been handled, the texture-level adjustment (step 105) smooths the textures of the mesh faces; as shown in Fig. 3, it includes texture normalization (step 301) and texture blurring (step 302). Texture normalization (step 301) takes the pixel intensities of the textures corresponding to the repeated mesh faces, computes the mean value within each face, and then applies the following formula:
T_s'(x_i, y_i) = T_s(x_i, y_i) - μ_s + μ_b
where μ_s is the average pixel intensity of the texture of the repeated mesh face on the three-dimensional model;
μ_b is the average pixel intensity of the texture of the repeated mesh face from the input image;
T_s(x_i, y_i) is the texture pixel intensity at each point in the mesh face; and
T_s'(x_i, y_i) is the adjusted texture pixel intensity at each point in the mesh face.
Its principle is the same as above, so it is not repeated. Texture blurring (step 302) is then carried out: the arithmetic mean intensity μ_S0 of the texture corresponding to each mesh face is computed first (see Fig. 4), then the arithmetic mean intensities μ_S1 ~ μ_S10 of the textures of the surrounding mesh faces are computed in turn, and a weighted-average formula is applied to blend them.
Different conditions (such as distance, brightness, etc.) are used to decide the weighting proportions, so that the texture of each mesh face is blended and blurred with the textures of its surrounding faces; this reduces the boundaries between adjacent faces and gives a smoother result. The mesh faces illustrated in the figure are triangles, but the shape is not limited to triangles; any geometric tessellation may be used.
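The blending equation itself appears only as a figure in the original publication, so the sketch below shows one plausible instantiation: an inverse-distance-weighted average of the face mean μ_S0 with its neighbors' means, which then shifts the face texture toward the blended value. The weighting scheme and the `strength` parameter are assumptions for illustration.

```python
import numpy as np

def blur_face_texture(face_tex, face_mean, neighbor_means, neighbor_dists, strength=0.5):
    """Texture blurring (step 302), one plausible instantiation of the weighted blend.

    face_tex: 2D array of texture pixel intensities of this face.
    face_mean: μ_S0, the arithmetic mean intensity of this face's texture.
    neighbor_means: μ_S1..μ_Sn of the surrounding faces.
    neighbor_dists: distances from this face's centroid to each neighbor's centroid.
    """
    weights = np.array([1.0] + [1.0 / d for d in neighbor_dists])  # self weight 1, neighbors by 1/d
    means = np.array([face_mean] + list(neighbor_means))
    target_mean = np.dot(weights, means) / weights.sum()           # blended mean intensity
    # Shift the face texture part of the way toward the blended mean.
    return face_tex + strength * (target_mean - face_mean)
```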
Finally, the pixel-level adjustment (step 106) is carried out; referring to Fig. 5, any mesh face is chosen first (step 501), and it is judged whether there is a discontinuous color change between its texture and those of the surrounding faces (step 502). If not, the flow continues to check whether all faces have been processed (step 506), until every face has been handled. If a discontinuous color change is found, a pixel on the image boundary is captured first (step 503) and the closest pixel in the texture of the surrounding face is found; the intensity of the boundary pixel is set to the weighted mean of the two (step 504), the difference before and after the adjustment is recorded, and the other pixel intensities inside the face are further adjusted (step 505). The pixel intensities are adjusted with a formula whose quantities are defined as follows:
where w_i is the relevant weighting proportion;
Id_i is the above adjustment difference;
N is the total number of adjustments;
T_s(x_i, y_i) is the pixel intensity at each point of the three-dimensional model; and
T_s'(x_i, y_i) is the adjusted pixel intensity at each point of the three-dimensional model.
In the above formula, the intensity difference of a single nearest boundary pixel (N = 1) or of pixels on several boundaries (N = the number of boundary edges of the polygon) can be used to adjust the pixels inside the face. This gives a smooth variation in the visual result, removes the discontinuous color changes, and also eliminates the different brightness effects of the two adjacent blocks.
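The adjustment formula is likewise given only as a figure, so the following sketch illustrates one possible reading of steps 503-505: the boundary pixel is averaged with the closest pixel of the neighboring texture, and the recorded difference is propagated to the interior pixels with a distance-based weight. The falloff weighting and parameter names are assumptions for illustration.

```python
import numpy as np

def pixel_level_adjust(face_tex, boundary_xy, boundary_val, neighbor_val, falloff=5.0):
    """Pixel-level adjustment (steps 503-505), sketched under simple assumptions.

    face_tex: 2D array of this face's texture intensities (a corrected copy is returned).
    boundary_xy: (row, col) of the boundary pixel inside face_tex.
    boundary_val: intensity of that boundary pixel.
    neighbor_val: intensity of the closest pixel in the adjacent face's texture.
    """
    tex = face_tex.astype(float).copy()
    new_val = 0.5 * (boundary_val + neighbor_val)   # step 504: weighted (here equal) mean
    diff = new_val - boundary_val                   # recorded adjustment difference Id
    rows, cols = np.indices(tex.shape)
    dist = np.hypot(rows - boundary_xy[0], cols - boundary_xy[1])
    weight = np.exp(-dist / falloff)                # step 505: distance-based weights w_i
    return tex + weight * diff
```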
The above is only a preferred embodiment of the present invention and is not intended to limit its scope of practice; all equivalent changes and modifications made according to the claims of the present application are covered by the claims of the present invention.
Claims (8)
1. A hierarchical texture-mapping processing method for a three-dimensional model, for conforming images onto a three-dimensional model that has a texture map, characterized in that the method comprises the following steps:
providing an image for the three-dimensional model;
converting the image and the texture map into the same spatial coordinate system and dividing them into several mesh faces;
comparing the image and the texture map in the spatial coordinate system and extracting the repeated mesh faces;
adjusting the pixel intensities of the image and the texture map by a weighted-average calculation over the pixel intensities of the repeated mesh faces;
using a preset condition to take the texture of either the image or the texture map as the texture of each mesh face;
smoothing the texture of the mesh faces;
making the pixels of the mesh faces continuous; and
restoring the mesh faces and outputting the three-dimensional model.
2. The hierarchical texture-mapping processing method for a three-dimensional model according to claim 1, characterized in that the preset condition is selected from the combination consisting of resolution, mesh orientation and the shooting angle of the image.
3. The hierarchical texture-mapping processing method for a three-dimensional model according to claim 1, characterized in that the step of smoothing the texture of the mesh faces includes texture normalization and texture blurring.
4. The hierarchical texture-mapping processing method for a three-dimensional model according to claim 3, characterized in that the texture normalization adjusts, by weighted-average calculation, the image intensities of the image and of the texture map corresponding to each mesh face.
5. The hierarchical texture-mapping processing method for a three-dimensional model according to claim 3, characterized in that the texture blurring adjusts each mesh face by a weighted-average calculation over the textures of the mesh face and its adjacent mesh faces.
6. The hierarchical texture-mapping processing method for a three-dimensional model according to claim 1, characterized in that the step of making the pixels of a mesh face continuous uses color blending with the adjacent mesh faces.
7. The hierarchical texture-mapping processing method for a three-dimensional model according to claim 6, characterized in that the color-blending step comprises the following steps:
capturing a pixel on the edge of a mesh face where the color is discontinuous; and
taking the weighted mean of the intensities of that pixel and its surrounding pixels as the intensity value of that pixel.
8. The hierarchical texture-mapping processing method for a three-dimensional model according to claim 7, characterized in that, after the step of taking the weighted mean of the intensities of the pixel and its surrounding pixels as the intensity value of that pixel, the method further includes:
calculating the difference between the weighted mean value and the original pixel intensity; and
adjusting the intensities of the remaining pixels in the mesh face by this pixel intensity difference.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 03123872 CN1277240C (en) | 2003-05-23 | 2003-05-23 | Three-dimensional model sectional grain faced pattern treating method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1549208A true CN1549208A (en) | 2004-11-24 |
CN1277240C CN1277240C (en) | 2006-09-27 |
Family
ID=34321493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 03123872 Expired - Fee Related CN1277240C (en) | 2003-05-23 | 2003-05-23 | Three-dimensional model sectional grain faced pattern treating method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN1277240C (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103314396A (en) * | 2010-12-16 | 2013-09-18 | 佩特媒体株式会社 | Mosaic image processing device, method, and program using 3D information |
CN109859134A (en) * | 2019-01-30 | 2019-06-07 | 珠海天燕科技有限公司 | A kind of processing method and terminal of makeups material |
Also Published As
Publication number | Publication date |
---|---|
CN1277240C (en) | 2006-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6226000B1 (en) | Interactive image editing | |
US7557812B2 (en) | Multilevel texture processing method for mapping multiple images onto 3D models | |
EP0812447B1 (en) | Computer graphics system for creating and enhancing texture maps | |
US6636212B1 (en) | Method and apparatus for determining visibility of groups of pixels | |
EP2181433B1 (en) | Methods and apparatus for multiple texture map storage and filtering | |
US6919903B2 (en) | Texture synthesis and transfer for pixel images | |
CN108280290B (en) | Concrete aggregate numerical model reconstruction method | |
US8928662B2 (en) | Apparatus, method, and system for demonstrating a lighting solution by image rendering | |
US9013499B2 (en) | Methods and apparatus for multiple texture map storage and filtering including irregular texture maps | |
DE102015113240A1 (en) | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR SHADING USING A DYNAMIC OBJECT ROOM GATE | |
Krivánek et al. | Fast depth of field rendering with surface splatting | |
CN101044505A (en) | Adaptiv 3d scanning | |
US20150262413A1 (en) | Method and system of temporally asynchronous shading decoupled from rasterization | |
CN106500626A (en) | A kind of mobile phone stereoscopic imaging method and three-dimensional imaging mobile phone | |
CN1277240C (en) | Three-dimensional model sectional grain faced pattern treating method | |
CN107830800A (en) | A kind of method that fine elevation is generated based on vehicle-mounted scanning system | |
Baker | Object space lighting | |
CN101067870A (en) | High light hot spot eliminating method using for visual convex shell drawing and device thereof | |
Pomaska | Implementation of digital 3D-models in building surveys based on multi image photogrammetry | |
Stumpfel et al. | Assembling the sculptures of the parthenon | |
Scheiblauer et al. | Consolidated visualization of enormous 3d scan point clouds with scanopy | |
Hanusch | Texture mapping and true orthophoto generation of 3D objects | |
Song et al. | Photorealistic building modeling and visualization in 3-D geospatial information system | |
Oh | A system for image-based modeling and photo editing | |
CN118172468A (en) | Method and system for avoiding texture of moving target in three-dimensional reconstruction based on depth map |
Legal Events
Date | Code | Title | Description
---|---|---|---
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
C14 | Grant of patent or utility model | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20060927; Termination date: 20200523