CN1894728A - Method of and scaling unit for scaling a three-dimensional model - Google Patents

Method of and scaling unit for scaling a three-dimensional model

Info

Publication number
CN1894728A
Authority
CN
China
Prior art keywords
dimensional
output
input
data point
model
Prior art date
Legal status
Granted
Application number
CNA2004800375276A
Other languages
Chinese (zh)
Other versions
CN100498840C (en)
Inventor
P·-A·雷德特
A·H·W·范伊尤威克
Current Assignee
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV filed Critical Koninklijke Philips Electronics NV
Publication of CN1894728A publication Critical patent/CN1894728A/en
Application granted granted Critical
Publication of CN100498840C publication Critical patent/CN100498840C/en
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/286 Image signal generators having separate monoscopic and stereoscopic modes

Abstract

A method of scaling a three-dimensional input model (100) in a three-dimensional input space into a three-dimensional output model (200) which fits in a predetermined three-dimensional output space (104) is disclosed. The scaling is such that a first input surface (106) in the three-dimensional input space, having a first distance to a viewpoint, is projected to a first output surface (110) in the predetermined three-dimensional output space by applying a first scaling factor, and a second input surface (108) in the three-dimensional input space, having a second distance to the viewpoint which is smaller than the first distance, is projected to a second output surface (112) in the predetermined three-dimensional output space by applying a second scaling factor which is larger than the first scaling factor.

Description

Method of and scaling unit for scaling a three-dimensional model
The invention relates to a method of scaling a three-dimensional input model in a three-dimensional input space into a three-dimensional output model which fits in a predetermined three-dimensional output space.
The invention further relates to a scaling unit for scaling a three-dimensional input model in a three-dimensional input space into a three-dimensional output model which fits in a predetermined three-dimensional output space.
The invention further relates to an image display apparatus comprising:
- receiving means for receiving a signal representing a three-dimensional model;
- a scaling unit as described above; and
- rendering means for rendering a three-dimensional image on the basis of the three-dimensional output model.
The invention further relates to a computer program product to be loaded by a computer arrangement, comprising instructions to scale a three-dimensional input model in a three-dimensional input space into a three-dimensional output model which fits in a predetermined three-dimensional output space.
The probability that the dimensions of a three-dimensional scene do not match the display capabilities of the image display device on which the scene is to be displayed is very high; e.g. a three-dimensional scene comprising a house is larger than the typical size of a display device. Hence a scaling operation is needed. Other reasons for scaling are fitting the geometry of the three-dimensional model representing the three-dimensional scene to a transmission channel, or adapting the three-dimensional model to the preferences of the viewer.
Performing a linear scaling operation on a three-dimensional model representing a three-dimensional scene is known. The scaling of depth information is in principle carried out by a linear adaptation of the depth information to the depth range of the output space. Alternatively, information beyond the boundary of the display capabilities is clipped.
A drawback of this depth adaptation, or scaling, is that it can result in reduced depth perception. Linear depth scaling in particular is disadvantageous for the depth perception of the scaled three-dimensional model, because the available depth range of display devices is usually relatively small.
It is an object of the invention to provide a method of the kind described in the opening paragraph which results in a scaled three-dimensional output model with a relatively strong three-dimensional impression.
This object of the invention is achieved in that a first input surface in the three-dimensional input space, having a first distance to a viewpoint, is projected to a first output surface in the predetermined three-dimensional output space by applying a first scaling factor, and a second input surface in the three-dimensional input space, having a second distance to the viewpoint which is smaller than the first distance, is projected to a second output surface in the predetermined three-dimensional output space by applying a second scaling factor which is larger than the first scaling factor. In other words, the applied scaling factor is not a constant but depends on the distance between the input surface and the viewpoint: a smaller distance means a larger scaling factor. An advantage of the method according to the invention is that the three-dimensional input model is changed, i.e. transformed, such that the depth perception is at least maintained. Note that with linear three-dimensional scaling the depth perception is reduced if the depth range of the predetermined three-dimensional output space is much smaller than the depth of the three-dimensional input space. With the method according to the invention this reduction is prevented and, optionally, the depth perception is even enhanced.
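The relation between distance and scaling factor can be illustrated with a short sketch. This is a minimal illustration, not code from the patent: it assumes the viewpoint lies at the origin of the z-axis, that the input model occupies the depth interval [z2, z3], that the output space starts at depth z1 and has depth range d (the symbols used with the figures below), and that input depths are remapped linearly onto the output depth range, which is one possible choice.

    # Minimal sketch of a distance-dependent scaling factor (assumed helper names).
    def output_depth(z_in, z2, z3, z1, d):
        """Map an input depth in [z2, z3] linearly onto [z1, z1 + d]."""
        return z1 + (z_in - z2) * d / (z3 - z2)

    def scale_factor(z_in, z2, z3, z1, d):
        """Perspective scale factor for a surface at input depth z_in.

        A smaller distance to the viewpoint yields a larger factor, as required.
        """
        return output_depth(z_in, z2, z3, z1, d) / z_in

    # With the values of Example 1 below (z1=2, z2=10, z3=15, d=0.10):
    print(scale_factor(15, 10, 15, 2, 0.10))  # far surface:  ~0.14
    print(scale_factor(10, 10, 15, 2, 0.10))  # near surface:  0.20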
In an embodiment of the method according to the invention, a first input data point of the three-dimensional input model located at the first input surface is projected to a first output data point of the three-dimensional output model located at the first output surface by means of a perspective projection relative to the viewpoint, and a second input data point of the three-dimensional input model located at the second input surface is projected to a second output data point of the three-dimensional output model located at the second output surface by means of a perspective projection relative to the same viewpoint. In other words, the input data points are projected to the corresponding output data points on the basis of a single viewpoint. An advantage of this embodiment of the method according to the invention is that the original depth perception of the input three-dimensional model is maintained: the original depth perception and the depth perception of the output three-dimensional model are substantially mutually equal.
In another embodiment of the method according to the invention, the first input data point of the three-dimensional input model located at the first input surface is projected to the first output data point of the three-dimensional output model located at the first output surface by means of a perspective projection relative to the viewpoint, while the second input data point of the three-dimensional input model located at the second input surface is projected to the second output data point of the three-dimensional output model located at the second output surface by means of a perspective projection relative to another viewpoint. In other words, the input data points are projected to the corresponding output data points on the basis of multiple viewpoints. An advantage of this embodiment of the method according to the invention is that the original depth perception of the input three-dimensional model is enhanced: the depth perception of the output three-dimensional model is even higher than the original depth perception of the input three-dimensional model.
It is another object of the invention to provide a scaling unit of the kind described in the opening paragraph which provides a scaled three-dimensional output model with a relatively strong three-dimensional impression.
This object of the invention is achieved in that the scaling unit comprises computing means for computing the coordinates of the output data points of the three-dimensional output model which correspond to the respective input data points of the three-dimensional input model, whereby a first input data point, located at a first input surface of the three-dimensional input space and having a first distance to a viewpoint, is projected to a first output data point located at a first output surface of the predetermined three-dimensional output space by applying a first scaling factor, and a second input data point, located at a second input surface of the three-dimensional input space and having a second distance to the viewpoint which is smaller than the first distance, is projected to a second output data point located at a second output surface of the predetermined three-dimensional output space by applying a second scaling factor which is larger than the first scaling factor.
It is another object of the invention to provide an image processing apparatus of the kind described in the opening paragraph which provides a scaled three-dimensional output model with a relatively strong three-dimensional impression. This object of the invention is achieved in that the image processing apparatus comprises a scaling unit with computing means as described above.
It is another object of the invention to provide a computer program product of the kind described in the opening paragraph which results in a scaled three-dimensional output model with a relatively strong three-dimensional impression. This object of the invention is achieved in that the computer program product, after being loaded into a computer arrangement comprising processing means and a memory, provides said processing means with the capability to perform the computation of the output data point coordinates described above.
Modifications of the scaling unit and variations thereof may correspond to modifications and variations of the image processing apparatus, the method and the computer program product described.
These and other aspects of the method, the scaling unit, the image processing apparatus and the computer program product according to the invention will become apparent from and will be elucidated with reference to the implementations and embodiments described hereinafter and with reference to the accompanying drawings, wherein:
Fig. 1 schematically shows the scaling method according to the prior art;
Fig. 2A schematically shows a first embodiment of the scaling method according to the invention;
Fig. 2B schematically shows a second embodiment of the scaling method according to the invention;
Fig. 3 schematically shows the mapping of input data points to data cells of a memory device; and
Fig. 4 schematically shows an image processing apparatus according to the invention.
Several types of models exist for storing three-dimensional information, i.e. three-dimensional models:
- Wireframes, e.g. as specified for VRML. These models comprise structures of lines and faces.
- Volumetric data structures or voxel maps (voxel: volume element). These comprise a three-dimensional array of elements, each element having three dimensions and a value representing a property. For example, CT (computed tomography) data are stored as volumetric data structures in which each element corresponds to a Hounsfield value (a unit expressing X-ray absorption).
- Two-dimensional images with depth maps, e.g. a two-dimensional image with RGBZ values. This means that each pixel comprises three color component values, which together also represent a luminance value, and a depth value.
- Image-based models, e.g. stereo image pairs or multiview images. These types of images are also called light fields.
It is possible to convert data represented by one type of three-dimensional model into another type of three-dimensional model. For example, data represented by a wireframe or by a two-dimensional image with depth map can be converted, by means of rendering, into data represented by a volumetric data structure or by an image-based model.
The amount of depth that can be realized with a three-dimensional display device depends on its type. With a volumetric display device the amount of depth is fully determined by the dimensions of the display device. Stereo displays, e.g. with glasses, have a soft limit for the amount of depth which depends on the observer: the observer may become fatigued if the amount of depth is too large, caused by a "conflict" between lens accommodation and the mutual focusing of the eyes. Autostereoscopic display devices, such as an LCD with a lenticular screen for multiple views, have a theoretical maximum depth range which is determined by the number of views.
Fig. 1 schematically shows the linear scaling method according to the prior art. Fig. 1 shows a three-dimensional input model 100 and the linearly scaled three-dimensional output model 102 which fits in the predetermined three-dimensional output space 104, e.g. corresponding to the viewing volume of a three-dimensional display device. Scaling means that each input data point I_1, I_2, I_3 and I_4 is transformed into a corresponding output data point O_1, O_2, O_3 and O_4. As depicted in Fig. 1, the input data point I_1, located at a first input surface 106 forming one border of the three-dimensional input model 100, is mapped to the output data point O_1, located at a first output surface 110 forming one border of the predetermined three-dimensional output space 104. Also depicted in Fig. 1, the input data point I_3, located at a second input surface 108 forming another border of the three-dimensional input model 100, is mapped to the output data point O_3, located at a second output surface 112 forming another border of the predetermined three-dimensional output space 104. Both the three-dimensional input model 100 and the three-dimensional output model 102 have a rectangular shape. For example, the width W_i(z3) of a first edge, located at the first input surface 106 (z=z3) of the three-dimensional input model 100, equals the width W_i(z2) of a second edge, located at the second input surface 108 (z=z2), and the width W_o(z4) of a first edge, located at the first output surface 110 (z=z4) of the three-dimensional output model 102, equals the width W_o(z1) of a second edge, located at the second output surface 112 (z=z1) of the three-dimensional output model 102.
Fig. 2A schematically shows a scaling method according to the invention. Fig. 2A shows the three-dimensional input model 100 and the three-dimensional output model 200 scaled in accordance with a single viewpoint V_1. Whereas the three-dimensional input model 100 has a rectangular shape, the three-dimensional output model 200 has a trapezoidal shape. For example, the first width W_i(z3) of the first edge (z=z3) of the three-dimensional input model 100 equals the second width W_i(z2) of the second edge (z=z2) of the three-dimensional input model 100, but the first width W_o(z4) of the first edge (z=z4) of the three-dimensional output model 200 differs from the second width W_o(z1) of the second edge of the three-dimensional output model 200. In other words, the extent to which the first width W_i(z3) and the second width W_i(z2) of the three-dimensional input model 100 are changed into the first width W_o(z4) and the second width W_o(z1), respectively, depends on the position, i.e. the z-coordinate, of the respective edges of the three-dimensional input model 100.
The scaling, i.e. the transformation of the three-dimensional input model 100 into the predetermined three-dimensional output space 104, is such that each output data point O_1, O_2, O_3 and O_4 is placed on, or close to (depending on the accuracy of the computation), the line from the respective input data point I_1, I_2, I_3 and I_4 to the single viewpoint V_1. For example, the output data point O_3 is located on the line l_1 from the input data point I_3 to the viewpoint V_1, and the output data point O_2 is located on the line l_2 from the input data point I_2 to the viewpoint V_1.
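The placement of output data points on the lines towards V_1 can be sketched as follows. This is an illustration under the same assumptions as the earlier sketch (viewpoint V_1 at the origin, linear depth remapping) with illustrative function names, not code from the patent; an output point keeps the ratios x/z and y/z of its input point, which is exactly the condition that it lies on the line from the input point to V_1.

    # Sketch of the projection of Fig. 2A towards a single viewpoint at the origin.
    def project_point(p, z2, z3, z1, d):
        """Project input point p = (x, y, z) along the line from p to V_1."""
        x, y, z = p
        z_out = z1 + (z - z2) * d / (z3 - z2)  # linear depth remapping (one choice)
        s = z_out / z                          # perspective scale factor
        return (x * s, y * s, z_out)           # same x/z and y/z ratios as p

    # Corners of the 5 m wide room of Example 1 below (z1=2, z2=10, z3=15, d=0.10):
    print(project_point((2.5, 2.5, 15), 10, 15, 2, 0.10))  # ~(0.35, 0.35, 2.1)
    print(project_point((2.5, 2.5, 10), 10, 15, 2, 0.10))  # ~(0.5,  0.5,  2.0)

The two printed x-values reproduce the edge widths of Example 1 below: 2 x 0.35 m = 0.7 m for the far edge and 2 x 0.5 m = 1 m for the near edge.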
As explained above, the scaling according to the invention is a kind of perspective projection which results in a deformation of the three-dimensional input model. Some computations illustrating the amount of deformation are given below. The amount of deformation is related to the difference between the change of the first edge and the change of the second edge.
The first width W_o(z4) = ||O_2 - O_1|| of the first edge of the three-dimensional output model 200, on which the output data points O_1 and O_2 are located, can be computed with Equation 1:

    W_o(z4) = (W_i(z3) / z3) * (z1 + d)    (1)

where W_i(z3) = ||I_2 - I_1|| is the first width of the first edge of the three-dimensional input model 100, on which the input data points I_1 and I_2 are located, z1 is the z-coordinate of the border of the predetermined three-dimensional output space 104, z3 is the z-coordinate of the first edge of the three-dimensional input model 100, and d is the depth, i.e. the z-range, of the predetermined three-dimensional output space 104.
The second width W_o(z1) = ||O_4 - O_3|| of the second edge of the three-dimensional output model 200, on which the output data points O_3 and O_4 are located, can be computed with Equation 2:

    W_o(z1) = (W_i(z2) / z2) * z1    (2)

where W_i(z2) = ||I_4 - I_3|| is the second width of the second edge of the three-dimensional input model 100, on which the input data points I_3 and I_4 are located, and z2 is the z-coordinate of the second edge of the three-dimensional input model 100.
The difference in size between the two output edges can be computed with Equation 3:

    D = W_o(z1) - W_o(z4) = (W_i(z2) / z2) * z1 - (W_i(z3) / z3) * (z1 + d)    (3)
The relative difference between the two output edges corresponds to the difference between the scaling factors applied for the different edges, i.e. surfaces; see Equation 4:

    D_rel = (W_o(z1) - W_o(z4)) / W_o(z1) * 100%    (4)
A number of examples are listed in Table 1. Example 1: suppose that the three-dimensional input model 100 corresponds to a scene, e.g. a room of 5m*5m*10m, i.e. W_i(z2) = 5 m, W_i(z3) = 5 m and z3 - z2 = 5 m. The predetermined three-dimensional output space 104 corresponds to a volumetric display device with a depth range of 10 cm, i.e. d = 0.10 m. The distance between the viewpoint V_1 at the position of the observer and the volumetric display device equals 2 m, i.e. z1 = 2 m. Then the first width W_o(z4) of the first edge of the scaled three-dimensional output model 200 equals 0.7 m and the second width W_o(z1) of the second edge equals 1 m. Hence, the difference between the two output edges equals 0.3 m and the difference between the scaling factors is 30%.
Example 2: most of the values of Example 1 apply. However, the three-dimensional input model 100 now corresponds to a corridor of 5m*20m. Then the first width W_o(z4) of the first edge of the scaled three-dimensional output model 200 equals 0.35 m, while the second width W_o(z1) of the second edge still equals 1 m. Hence, the difference between the two output edges equals 0.65 m and the difference between the scaling factors is 65%.
Example 3: a first actor at z = z3 has a width W_i(z3) of 0.5 m and a second actor at z = z2 has a width W_i(z2) of 0.5 m. The distance between the two actors is 1 m, i.e. z3 - z2 = 1. The representation of the first actor is scaled to the first output surface 110 of the scaled three-dimensional output model 200; the width W_o(z4) equals 0.095 m. The representation of the second actor is scaled to the second output surface 112; the width W_o(z1) equals 0.100 m. Hence, the difference between the two output edges equals 0.005 m and the difference between the scaling factors is 5%.
Example 4: most of the values of Example 3 apply. However, the predetermined three-dimensional output space 104 now corresponds to a volumetric display device with a depth range of 5 cm, i.e. d = 0.05 m. Then the difference between the two output edges equals 0.007 m and the difference between the scaling factors is 7%.
Example 5: most of the values of Example 3 apply. However, the predetermined three-dimensional output space 104 now corresponds to a volumetric display device with a depth range of 2 cm, i.e. d = 0.02 m. Then the difference between the two output edges equals 0.008 m and the difference between the scaling factors is 8%.
Table 1. Examples of the scaling.

    Example  W_i(z2)  W_i(z3)  z1  z2  z3  d     W_o(z1)  W_o(z4)  D      D_rel [%]
    1        5        5        2   10  15  0.10  1        0.7      0.3    30
    2        5        5        2   10  30  0.10  1        0.35     0.65   65
    3        0.5      0.5      2   10  11  0.10  0.100    0.095    0.005  5
    4        0.5      0.5      2   10  11  0.05  0.100    0.093    0.007  7
    5        0.5      0.5      2   10  11  0.02  0.100    0.092    0.008  8
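Equations 1 to 4 and Table 1 can be checked with a few lines of code. This is a sketch whose variable names mirror the symbols in the text, not part of the patent; the printed values reproduce Table 1 up to rounding.

    # Check of Equations (1)-(4) against Table 1.
    def widths(Wi_z2, Wi_z3, z1, z2, z3, d):
        Wo_z4 = Wi_z3 / z3 * (z1 + d)   # Equation (1): far output edge
        Wo_z1 = Wi_z2 / z2 * z1         # Equation (2): near output edge
        D = Wo_z1 - Wo_z4               # Equation (3): absolute difference
        D_rel = D / Wo_z1 * 100         # Equation (4): relative difference in %
        return Wo_z1, Wo_z4, D, D_rel

    for row in [(5, 5, 2, 10, 15, 0.10),       # Example 1 -> 1, 0.7,   0.3,   30
                (5, 5, 2, 10, 30, 0.10),       # Example 2 -> 1, 0.35,  0.65,  65
                (0.5, 0.5, 2, 10, 11, 0.10),   # Example 3 -> 0.100, 0.095, 0.005, 5
                (0.5, 0.5, 2, 10, 11, 0.05),   # Example 4 -> 0.100, 0.093, 0.007, 7
                (0.5, 0.5, 2, 10, 11, 0.02)]:  # Example 5 -> 0.100, 0.092, 0.008, 8
        print(widths(*row))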
Fig. 2B schematically shows a second embodiment of the scaling method according to the invention. In this embodiment there is not a single viewpoint but a range of viewpoints V_1, V_2. The scaling, i.e. the transformation of the three-dimensional input model 100 into the predetermined three-dimensional output space 104, is such that each output data point O_1, O_2, O_3 and O_4 is placed on, or very close to (depending on the accuracy of the computation), a line from the respective input data point I_1, I_2, I_3 and I_4 to a viewpoint in the range V_1, V_2. For example, the output data point O_3 is located on the line l_1 from the input data point I_3 to the viewpoint V_1, while the output data point O_2 is located on the line l_2 from the input data point I_2 to another viewpoint V_2. Preferably, each input surface of the three-dimensional input model is projected to the corresponding output surface of the three-dimensional output model on the basis of its own viewpoint. Comparing the three-dimensional output model 202 shown in Fig. 2B with the three-dimensional output model 200 shown in Fig. 2A shows that the deformation of the three-dimensional input model 100 is larger in the embodiment of the method according to the invention described in connection with Fig. 2B than in the embodiment described in connection with Fig. 2A. Note that W_o*(z4) < W_o(z4). The amount of additional deformation is related to the amount of enhancement of the depth perception.
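One possible way to realize such a range of viewpoints is to give each input depth its own viewpoint on the z-axis, interpolated between V_1 and V_2. The following sketch uses this interpolation, which is an assumption chosen so that surfaces further away shrink more, enhancing the depth impression; it is not prescribed by the patent.

    # Sketch of a two-viewpoint projection in the spirit of Fig. 2B.
    def project_point_multi(p, z2, z3, z1, d, zv1=0.0, zv2=1.0):
        x, y, z = p
        z_out = z1 + (z - z2) * d / (z3 - z2)  # same depth remapping as before
        t = (z - z2) / (z3 - z2)               # 0 at the near face, 1 at the far face
        zv = zv1 + t * (zv2 - zv1)             # per-depth viewpoint between V_1 and V_2
        s = (z_out - zv) / (z - zv)            # similar triangles towards (0, 0, zv)
        return (x * s, y * s, z_out)

    # Far corner of Example 1: scale ~0.079 < 0.14, i.e. W_o*(z4) < W_o(z4).
    print(project_point_multi((2.5, 2.5, 15), 10, 15, 2, 0.10))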
It should be understood that, instead of enhancing the depth perception as described in connection with Fig. 2B, a limited reduction of the depth perception can also be achieved by using multiple viewpoints, e.g. one viewpoint behind or above another. This is not described further.
Fig. 3 schematically shows the mapping of input data points I_1-I_11 to the data cells 304-320 of a memory device 302. The memory device 302 comprises a three-dimensional data structure which represents the predetermined three-dimensional output space. Fig. 3 depicts only one z,x-slice of the three-dimensional data structure, i.e. the y-coordinate is constant. Different y,x-slices 330-340 (i.e. slices with constant z-coordinate) may correspond to different view planes of a volumetric display device. The values (e.g. color and luminance) of the different output data points O_1-O_13 are stored in the respective data cells 304-320 of the memory device 302. For each input data point I_1-I_11 the corresponding data cell is determined and filled with the corresponding color and luminance values. For example, input data point I_1 is mapped to output data point O_1, which is stored in data cell 312, and input data point I_6 is mapped to output data point O_6, which is stored in data cell 304. The mapping from input data points to output data points is based on the known coordinates of the input data points, the predetermined three-dimensional output space and the viewpoint V_1. The x-, y- and z-coordinates of an output data point are determined by Equations 5, 6 and 7, respectively:
    O_i(x) = f_x(I_i(x, y, z))    (5)
    O_i(y) = f_y(I_i(x, y, z))    (6)
    O_i(z) = f_z(I_i(x, y, z))    (7)
In this case the z-coordinates of the output data points O_1-O_6 are computed as follows. It is known that the six input data points I_1-I_6 are located on one line l_1 through the viewpoint V_1. The z-coordinate of the first output surface, located at one border of the predetermined three-dimensional output space, is known; the z-coordinate of the first output data point O_1 on line l_1 is set to this known z-coordinate. The z-coordinate of the second output surface, located at the other border of the predetermined three-dimensional output space, is also known; the z-coordinate of the last output data point O_6 on line l_1 is set to this latter z-coordinate. The other z-coordinates divide the z-range of the predetermined three-dimensional output space into five equal distances. It should be noted that alternative mappings are possible, see below; a prerequisite is that the order of the input data points remains unchanged.
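A sketch of this equidistant z-assignment follows; the function name is illustrative and the returned list stands in for the voxel data structure of the memory device.

    # Spread n ordered points on a line through the viewpoint over the output
    # z-range: first point on the front boundary, last on the back boundary,
    # the remaining points at equal distances in between.
    def assign_output_depths(n, z1, d):
        """Return output z-coordinates for n ordered points on one line."""
        if n == 1:
            return [z1]            # placement of a lone point is arbitrary
        step = d / (n - 1)         # n points -> n - 1 equal distances
        return [z1 + i * step for i in range(n)]

    # Six points I_1..I_6 on line l_1, output space with z1 = 2 and d = 0.10:
    print(assign_output_depths(6, 2, 0.10))
    # -> [2.0, 2.02, 2.04, 2.06, 2.08, 2.1] (up to float rounding)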
Alternatively, the values of the output data points O_7-O_10 are computed on the basis of the distances between the input data points I_7-I_10. In this case the z-coordinates of the output data points O_7-O_10 are computed as follows. It is known that the four input data points I_7-I_10 are located on one line l_2 through the viewpoint V_1. The z-coordinate of the first output surface, located at one border of the predetermined three-dimensional output space, is known; the z-coordinate of the first output data point O_7 on line l_2 is set to this known z-coordinate. The z-coordinate of the second output surface, located at the other border of the predetermined three-dimensional output space, is also known; the z-coordinate of the last output data point O_10 on line l_2 is set to this latter z-coordinate. The z-coordinates of the output data points O_8 and O_9 are computed on the basis of the distances between the input data points I_7-I_10 and the distance between the first output surface and the second output surface. Alternatively, the values of further output data points O_11-O_12 are computed by interpolation of multiple input data points, e.g. I_7-I_10.
In another alternative, the value of a particular output data point O_13 is computed on the basis of other output data points, e.g. because the corresponding input data point I_11 belongs to an object in the three-dimensional input model. In Fig. 3, three input data points I_5, I_9 and I_11 are depicted on one line l_4, indicating that they belong to one object. The particular input data point I_11 is located on a line l_3 through the viewpoint V_1. Since no other input data points are located on line l_3, it is arbitrary in which data cell the value is stored, i.e. to which z-coordinate the output data point O_13 is mapped. It could be stored in a data cell of the first y,x-slice 330, corresponding to one side of the three-dimensional output model, or in a data cell of the last y,x-slice 340, corresponding to the other side. Fig. 3 depicts another option: the z-coordinate of the output data point O_13 is based on the z-coordinates of the output data points O_5 and O_9.
It should be noted that the storage scheme described above is just an example. Alternatively, the three-dimensional output model is not stored in such a fixed three-dimensional data structure with fixed planes, i.e. y,x-slices 330. An option is then to store not only the luminance and/or color values of each output data point but also its coordinates.
Fig. 4 schematically shows an image processing apparatus 400 according to the invention. The image processing apparatus 400 comprises:
- a receiving unit 402 for receiving a signal representing a three-dimensional input model;
- a scaling unit 404 for scaling the three-dimensional input model into a three-dimensional output model; and
- a rendering device 405 for rendering a three-dimensional image on the basis of the three-dimensional output model.
The signal is provided at the input connector 410. It may, for example, be a digital stream of data conforming to the VRML specification. The signal may be a broadcast signal received via an antenna or cable, or a signal from a storage device such as a VCR (video cassette recorder) or a DVD (digital versatile disc).
The scaling unit 404 comprises computing means 407 for computing the coordinates of the output data points of the three-dimensional output model which correspond to the respective input data points of the three-dimensional input model, whereby a first input data point, located at a first input surface of the three-dimensional input space and having a first distance to a viewpoint, is projected to a first output data point located at a first output surface of the predetermined three-dimensional output space by applying a first scaling factor, and a second input data point, located at a second input surface of the three-dimensional input space and having a second distance to the viewpoint which is smaller than the first distance, is projected to a second output data point located at a second output surface of the predetermined three-dimensional output space by applying a second scaling factor which is larger than the first scaling factor. The computation of the coordinates of the output data points is described in connection with Figs. 2A, 2B and 3.
The computing means 407 of the scaling unit 404 and the rendering device 405 may be implemented by means of one processor; this is not depicted as such. Normally, these functions are performed under control of a software program product. During execution the software program product is usually loaded into a memory such as a RAM and executed from there. The program may be loaded from a background memory such as a ROM, a hard disk, or magnetic and/or optical storage, or via a network such as the Internet. Optionally, an application-specific integrated circuit provides the disclosed functionality.
Optionally, the image processing apparatus comprises a display device 406 for displaying the output images of the rendering device. The image processing apparatus 400 may, for example, be a TV. Alternatively, the image processing apparatus 400 does not comprise the optional display device but provides the output images to an apparatus which does comprise a display device 406; the image processing apparatus 400 may then be, for example, a set-top box, a satellite tuner, a VCR player, or a DVD player or recorder. Optionally, the image processing apparatus 400 comprises storage means, such as a hard disk, or means for storage on removable media, e.g. optical disks. The image processing apparatus 400 may also be a system applied in the field of motion picture production or by a broadcaster. Alternatively, the image processing apparatus 400 is comprised in a game device for playing games.
It should be noted that the viewpoint used for projecting the input data points to the output data points does not have to match the physical position of the observer. Furthermore, the viewpoint used for projecting the input data points to the output data points does not have to match another viewpoint which is used for rendering the three-dimensional image. This means that the different viewpoints should be interpreted as center points for the projection; they define the geometrical relation between the input data points and the output data points.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a claim enumerating several means, several of these means can be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words are to be interpreted as names.

Claims (8)

1. A method of scaling a three-dimensional input model (100) in a three-dimensional input space into a three-dimensional output model (200) which fits in a predetermined three-dimensional output space (104), wherein a first input surface (106) in the three-dimensional input space, having a first distance to a viewpoint, is projected to a first output surface (110) in the predetermined three-dimensional output space by applying a first scaling factor, and a second input surface (108) in the three-dimensional input space, having a second distance to the viewpoint which is smaller than the first distance, is projected to a second output surface (112) in the predetermined three-dimensional output space by applying a second scaling factor which is larger than the first scaling factor.
2. A method as claimed in claim 1, wherein a first input data point of the three-dimensional input model (100), located at the first input surface (106), is projected, by means of a perspective projection relative to the viewpoint, to a first output data point of the three-dimensional output model, located at the first output surface (110).
3. A method as claimed in claim 2, wherein a second input data point of the three-dimensional input model (100), located at the second input surface (108), is projected, by means of a perspective projection relative to the viewpoint, to a second output data point of the three-dimensional output model, located at the second output surface (112).
4. A method as claimed in claim 2, wherein a second input data point of the three-dimensional input model (100), located at the second input surface (108), is projected, by means of a perspective projection relative to another viewpoint, to a second output data point of the three-dimensional output model, located at the second output surface (112).
5. A scaling unit (404) for scaling a three-dimensional input model (100) in a three-dimensional input space into a three-dimensional output model (200) which fits in a predetermined three-dimensional output space (104), comprising computing means (407) for computing the coordinates of the output data points of the three-dimensional output model which correspond to the respective input data points of the three-dimensional input model, whereby a first input data point of the input data points, located at a first input surface (106) of the three-dimensional input space and having a first distance to a viewpoint, is projected to a first output data point of the output data points, located at a first output surface (110) of the predetermined three-dimensional output space, by applying a first scaling factor, and a second input data point of the input data points, located at a second input surface (108) of the three-dimensional input space and having a second distance to the viewpoint which is smaller than the first distance, is projected to a second output data point of the output data points, located at a second output surface (112) of the predetermined three-dimensional output space, by applying a second scaling factor which is larger than the first scaling factor.
6. An image processing apparatus (400) comprising:
- receiving means (402) for receiving a signal representing a three-dimensional input model;
- a scaling unit (404) as claimed in claim 5; and
- rendering means (405) for rendering a three-dimensional image on the basis of the three-dimensional output model.
7. An image processing apparatus (400) as claimed in claim 6, further comprising a display device (406) for displaying the three-dimensional image.
8. A computer program product to be loaded by a computer arrangement, comprising instructions to scale a three-dimensional input model (100) in a three-dimensional input space into a three-dimensional output model (200) which fits in a predetermined three-dimensional output space (104), the computer arrangement comprising processing means and a memory, the computer program product, after being loaded, providing said processing means with the capability to compute the coordinates of the output data points of the three-dimensional output model which correspond to the respective input data points of the three-dimensional input model, whereby a first input data point of the input data points, located at a first input surface (106) of the three-dimensional input space and having a first distance to a viewpoint, is projected to a first output data point of the output data points, located at a first output surface (110) of the predetermined three-dimensional output space, by applying a first scaling factor, and a second input data point of the input data points, located at a second input surface (108) of the three-dimensional input space and having a second distance to the viewpoint which is smaller than the first distance, is projected to a second output data point of the output data points, located at a second output surface (112) of the predetermined three-dimensional output space, by applying a second scaling factor which is larger than the first scaling factor.
CNB2004800375276A 2003-12-19 2004-12-06 Method of and scaling unit for scaling a three-dimensional model Expired - Fee Related CN100498840C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP03104818.4 2003-12-19
EP03104818 2003-12-19

Publications (2)

Publication Number Publication Date
CN1894728A true CN1894728A (en) 2007-01-10
CN100498840C CN100498840C (en) 2009-06-10

Family

ID=34707267

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004800375276A Expired - Fee Related CN100498840C (en) 2003-12-19 2004-12-06 Method of and scaling unit for scaling a three-dimensional model

Country Status (6)

Country Link
US (1) US7692646B2 (en)
EP (1) EP1697902A1 (en)
JP (1) JP4722055B2 (en)
KR (1) KR101163020B1 (en)
CN (1) CN100498840C (en)
WO (1) WO2005062257A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007096816A2 (en) * 2006-02-27 2007-08-30 Koninklijke Philips Electronics N.V. Rendering an output image
EP2340534B1 (en) * 2008-10-03 2013-07-24 RealD Inc. Optimal depth mapping
JP5573238B2 (en) * 2010-03-04 2014-08-20 ソニー株式会社 Information processing apparatus, information processing method and program
JP5050094B2 (en) * 2010-12-21 2012-10-17 株式会社東芝 Video processing apparatus and video processing method

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5222204A (en) * 1990-03-14 1993-06-22 Hewlett-Packard Company Pixel interpolation in perspective space
US6654014B2 (en) * 1995-04-20 2003-11-25 Yoshinori Endo Bird's-eye view forming method, map display apparatus and navigation system
JPH09115000A (en) * 1995-10-13 1997-05-02 Hitachi Ltd Real time simulation device and video generation method
JP3504111B2 (en) * 1996-06-27 2004-03-08 株式会社東芝 Stereoscopic system, stereoscopic method, and storage medium for storing computer program for displaying a pair of images viewed from two different viewpoints in a stereoscopic manner
JPH11113028A (en) * 1997-09-30 1999-04-23 Toshiba Corp Three-dimension video image display device
US20020163482A1 (en) * 1998-04-20 2002-11-07 Alan Sullivan Multi-planar volumetric display system including optical elements made from liquid crystal having polymer stabilized cholesteric textures
EP1264281A4 (en) * 2000-02-25 2007-07-11 Univ New York State Res Found Apparatus and method for volume processing and rendering
JP2002334322A (en) * 2001-05-10 2002-11-22 Sharp Corp System, method and program for perspective projection image generation, and storage medium stored with perspective projection image generating program
KR100918007B1 (en) * 2002-01-07 2009-09-18 코닌클리즈케 필립스 일렉트로닉스 엔.브이. Method of and scaling unit for scaling a three-dimensional model and display apparatus
JP3982288B2 (en) * 2002-03-12 2007-09-26 日本電気株式会社 3D window display device, 3D window display method, and 3D window display program
US6862501B2 (en) * 2002-10-28 2005-03-01 Honeywell International Inc. Method for producing 3D perspective view avionics terrain displays
CN1973304B (en) * 2003-07-11 2010-12-08 皇家飞利浦电子股份有限公司 Method of and scaling unit for scaling a three-dimensional model
JP2006018476A (en) * 2004-06-30 2006-01-19 Sega Corp Method for controlling display of image
JP4701379B2 (en) * 2004-10-08 2011-06-15 独立行政法人産業技術総合研究所 Correction method and system for two-dimensional display of three-dimensional model

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102326398A (en) * 2009-02-19 2012-01-18 索尼公司 Preventing interference between primary and secondary content in stereoscopic display
CN102326398B (en) * 2009-02-19 2015-04-08 索尼公司 Method and system for positioning primary and secondary image in stereoscopic display
CN106030652A (en) * 2013-12-11 2016-10-12 Arm有限公司 Method of and apparatus for displaying an output surface in data processing systems
CN106030652B (en) * 2013-12-11 2020-06-30 Arm有限公司 Method, system and composite display controller for providing output surface and computer medium
CN106484397A (en) * 2016-09-18 2017-03-08 乐视控股(北京)有限公司 The generation method of user interface controls and its device in a kind of 3d space

Also Published As

Publication number Publication date
WO2005062257A1 (en) 2005-07-07
JP4722055B2 (en) 2011-07-13
KR101163020B1 (en) 2012-07-09
EP1697902A1 (en) 2006-09-06
US7692646B2 (en) 2010-04-06
CN100498840C (en) 2009-06-10
KR20060114708A (en) 2006-11-07
JP2007515012A (en) 2007-06-07
US20090009536A1 (en) 2009-01-08

Similar Documents

Publication Publication Date Title
US20180192044A1 (en) Method and System for Providing A Viewport Division Scheme for Virtual Reality (VR) Video Streaming
CN100336076C (en) Image providing method and equipment
EP1719079B1 (en) Creating a depth map
US8270768B2 (en) Depth perception
US10237539B2 (en) 3D display apparatus and control method thereof
US20090244066A1 (en) Multi parallax image generation apparatus and method
CN1914643A (en) Creating a depth map
CN110310356B (en) Scene rendering method and device
CN1830217A (en) Multi-view image generation
CN1930512A (en) Multiview display device
CN103795961A (en) Video conference telepresence system and image processing method thereof
CN113873264A (en) Method and device for displaying image, electronic equipment and storage medium
CN102905141A (en) Two-dimension to three-dimension conversion device and conversion method thereof
CN110363837B (en) Method and device for processing texture image in game, electronic equipment and storage medium
CN1894728A (en) Method of and scaling unit for scaling a three-dimensional model
WO2001001348A1 (en) Image conversion and encoding techniques
CN108769648A (en) A kind of 3D scene rendering methods based on 720 degree of panorama VR
CN111327886B (en) 3D light field rendering method and device
CN101060642A (en) Method and apparatus for generating 3d on screen display
KR102065632B1 (en) Device and method for acquiring 360 VR images in a game using a plurality of virtual cameras
CN109525842B (en) Position-based multi-Tile permutation coding method, device, equipment and decoding method
CN114332356A (en) Virtual and real picture combining method and device
CN114979123A (en) System and method for preventing high-definition picture from being copied and accessing on demand
CN111476716B (en) Real-time video stitching method and device
CN1287331C (en) Method and equipment for estimating light source and producing mutual luminosity effect in public supporting space

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090610

Termination date: 20141206

EXPY Termination of patent right or utility model