WO2006061999A1 - Image conversion method, device, and program, texture mapping method, device, and program, and server-client system - Google Patents
- Publication number
- WO2006061999A1 (application PCT/JP2005/021687)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- parameter
- conversion
- pixel
- parameters
- Prior art date
Links
- 238000006243 chemical reaction Methods 0.000 title claims abstract description 154
- 238000000034 method Methods 0.000 title claims description 112
- 238000013507 mapping Methods 0.000 title claims description 22
- 238000005286 illumination Methods 0.000 claims abstract description 58
- 238000012545 processing Methods 0.000 claims abstract description 40
- 230000008569 process Effects 0.000 claims description 39
- 230000009467 reduction Effects 0.000 claims description 18
- 230000008859 change Effects 0.000 claims description 16
- 238000007781 pre-processing Methods 0.000 claims description 10
- 238000004364 calculation method Methods 0.000 claims description 8
- 239000013598 vector Substances 0.000 description 63
- 230000006835 compression Effects 0.000 description 29
- 238000007906 compression Methods 0.000 description 29
- 238000010586 diagram Methods 0.000 description 24
- 230000002457 bidirectional effect Effects 0.000 description 12
- 238000003384 imaging method Methods 0.000 description 12
- 230000009466 transformation Effects 0.000 description 8
- 230000003287 optical effect Effects 0.000 description 7
- 239000011159 matrix material Substances 0.000 description 6
- 238000009877 rendering Methods 0.000 description 6
- 238000004458 analytical method Methods 0.000 description 5
- 230000005684 electric field Effects 0.000 description 4
- 238000000926 separation method Methods 0.000 description 4
- 230000010287 polarization Effects 0.000 description 3
- 230000005540 biological transmission Effects 0.000 description 2
- 238000007796 conventional method Methods 0.000 description 2
- 238000000280 densification Methods 0.000 description 2
- 230000006866 deterioration Effects 0.000 description 2
- 238000004836 empirical method Methods 0.000 description 2
- 230000007613 environmental effect Effects 0.000 description 2
- 239000012212 insulator Substances 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 239000007787 solid Substances 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000013213 extrapolation Methods 0.000 description 1
- 238000007429 general method Methods 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000000691 measurement method Methods 0.000 description 1
- 238000002156 mixing Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000002310 reflectometry Methods 0.000 description 1
Classifications
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration by the use of local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
- G06T2207/20012—Locally adaptive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
Definitions
- Image conversion method, apparatus, and program; texture mapping method, apparatus, and program; server-client system
- the present invention relates to an image processing technique, and more particularly to a technique for realizing image conversion such as enlargement or reduction, image compression, and texture mapping.
- with the digitization of imaging devices and networks, arbitrary imaging devices can be interconnected, and the freedom of image exchange is increasing.
- an environment has been established in which users can freely handle images without being restricted by differences in systems. For example, users can output images taken with a digital still camera to a printer, publish them on a network, or view them on a home TV.
- Scalability refers to the freedom to extract images of various sizes from a single bit stream, for example standard TV image data in some cases and HDTV image data in others.
- with scalability, there is no need to prepare a separate transmission route for each image format, so less transmission capacity is required.
- Texture mapping is a technique that expresses the pattern and texture of an object surface by attaching a 2D image to the surface of a 3D object modeled in a computer.
- it involves processing such as enlargement, reduction, deformation, and rotation of the 2D image (see Non-Patent Document 1).
- in order to newly generate image data that does not exist at the time of sampling, luminance values are interpolated by a bilinear method, a bicubic method, or the like (see Non-Patent Document 1). Since interpolation can generate only intermediate values of the sampled data, sharpness at edges tends to deteriorate. Techniques have therefore been disclosed in which an interpolated image is used as an initial enlarged image and edge portions are then extracted and selectively emphasized (see Non-Patent Documents 2 and 3). In particular, Non-Patent Document 3 introduces a multi-resolution representation and a Lipschitz index so that edge enhancement can be applied selectively according to edge sharpness.
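To make the interpolation step concrete, here is a minimal bilinear upsampling sketch in Python/NumPy (the function name and factor handling are ours, not from the patent). Every output value is a weighted average of existing samples, which is exactly why edges soften:

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a 2-D luminance array by bilinear interpolation.

    Interpolation can only produce values between existing samples,
    which is the edge-softening weakness the patent addresses.
    """
    h, w = img.shape
    # target sample coordinates in source-pixel units
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights
    wx = (xs - x0)[None, :]  # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Because the result never exceeds the range of the input samples, a sharp step in `img` becomes a ramp in the output.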
- Patent Document 1 JP 2005-149390 A
- Non-Patent Document 1: Shinya Araya, "Clear 3D Computer Graphics", Kyoritsu Shuppan, pp. 144-145, 25 September 2003
- Non-Patent Document 2: H. Greenspan, C. H. Anderson, "Image enhancement by non-linear extrapolation in frequency space", SPIE Vol. 2182 Image and Video Processing II, 1994
- Non-Patent Document 3: Nakashige et al., "Multi-scale Image Resolution on Luminance Gradient Planes", IEICE Transactions D-II, Vol. J81-D-II, No. 10, pp. 2249-2258, Oct. 1998
- Non-Patent Document 4: Multimedia Communication Study Group, "Point-Illustrated Broadband + Mobile Standard MPEG Textbook", ASCII, pp. 25-29, February 11, 2003
- Non-Patent Document 5: Image Processing Handbook Editorial Committee, "Image Processing Handbook", Shogodo, p. 393, June 1987
- Non-Patent Document 6: Shinji Umeyama, "Separation of Diffuse/Specular Reflection Components from Object Appearance Using Multiple Observations through Polarization Filters and Stochastic Independence", Image Recognition and Understanding Symposium 2002, pp. I-469 to I-476, 2002
- edge enhancement of interpolated images during image enlargement and smoothing during image reduction are empirical methods, and since no explicit noise countermeasures are taken, image quality after image conversion cannot be guaranteed.
- an object of the present invention is to make the image quality more stable in image conversion, image compression, and texture mapping by making it less susceptible to noise than in the past.
- in the image conversion method of the present invention, a plurality of parameters constituting a predetermined illumination equation that gives luminance are acquired for each pixel of the first image; for each parameter, homogeneous regions composed of pixels with similar parameter values are specified; for each parameter, conversion processing is performed on each specified homogeneous region according to the content of the image conversion; and the luminance of each pixel of the second image is obtained using each parameter after the conversion processing.
- a plurality of parameters constituting an illumination equation that gives luminance are respectively acquired for the first image to be subjected to image conversion.
- the parameters referred to here are, for example, the optical characteristics of the subject, environmental conditions, the surface normal of the subject, and the like.
- a homogeneous region is specified for each parameter, and the parameter conversion process is performed for each specified homogeneous region according to the content of the image conversion.
- the luminance of each pixel of the second image after image conversion is obtained using each parameter after conversion processing.
- the luminance is decomposed into illumination equation parameters, and image conversion is performed using the correlation between pixels for each parameter.
- the illumination equation parameters, such as surface normals and optical properties, are highly independent of one another. For this reason, when processing is performed for each parameter, peculiarities caused by noise are easier to detect than when processing is performed on the luminance, which is given as an integrated value of the parameters.
- the optical characteristics can be decomposed into diffuse reflection components and specular reflection components, which are highly independent factors, so that the peculiarities of noise can be emphasized.
- the homogeneous region is specified based on the similarity of the illumination equation parameters, which are physical characteristics of the subject, so the region is defined on a physical basis.
- the edge portion is preserved as a boundary condition between homogeneous regions. It is therefore possible to realize image conversion with stable image quality while preserving the sharpness of edges and texture. Moreover, since edges need not be detected directly as in the prior art, the problem of noise contamination does not arise.
- in the image conversion method of the present invention, when image enlargement is performed, a process for increasing the density of the parameters may be performed as the conversion process for each parameter.
- the homogeneous region is defined on a physical basis. Therefore, compared with the conventional empirical technique of edge-enhancing an initial enlarged image obtained by interpolation, the present invention, which increases the density of parameters for each homogeneous region, is objective, and the image quality can be further stabilized.
- a process for reducing the density of the parameter may be performed.
- compared with the conventional empirical method using a low-pass filter, the present invention, which reduces the density of parameters for each homogeneous region, is objective, and the image quality can be made more stable.
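A hedged sketch of what per-region density reduction could look like, assuming a precomputed homogeneous-region label map (the labeling, the 2x2 block size, and the function name are our illustration, not the patent's procedure). Averaging only within the block's dominant region keeps the region boundary, i.e. the edge, from being blurred across:

```python
import numpy as np

def reduce_parameter_map(param: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Halve the resolution of a parameter map, averaging only the pixels
    of each 2x2 block that share the block's dominant homogeneous-region
    label, so region boundaries (edges) are not mixed together."""
    h, w = param.shape
    out = np.empty((h // 2, w // 2))
    for i in range(h // 2):
        for j in range(w // 2):
            blk_p = param[2 * i:2 * i + 2, 2 * j:2 * j + 2].ravel()
            blk_l = labels[2 * i:2 * i + 2, 2 * j:2 * j + 2].ravel()
            vals, counts = np.unique(blk_l, return_counts=True)
            dom = vals[np.argmax(counts)]  # dominant region in this block
            out[i, j] = blk_p[blk_l == dom].mean()
    return out
```

A plain 2x2 mean filter would smear values across a region boundary; this variant discards the minority-region pixels instead.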
- in the image compression method of the present invention, for each pixel of an image, a plurality of parameters constituting a predetermined illumination equation that gives luminance are respectively acquired; for each parameter, a homogeneous region composed of pixels with similar parameter values is specified; and for each parameter, compression coding of the parameter is performed for each specified homogeneous region.
- a plurality of parameters constituting an illumination equation for giving brightness are acquired for each image to be compressed. Then, a homogeneous region is specified for each parameter, and the parameter is compressed and encoded for each specified homogeneous region.
- the correlation between neighboring pixels is high for the illumination equation parameters, so the compression efficiency can be higher than that of image compression based on luminance values.
- the edge part is saved as a boundary condition between homogeneous regions. Therefore, it is possible to realize image compression with high compression efficiency while preserving the sharpness of the edges and the texture.
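As one illustrative take on region-wise compression coding (not the patent's actual codec; the label map, the 0.1 quantization step, and the function names are our assumptions), each region is stored as one mean plus coarsely quantized residuals, which stay small precisely because the region is homogeneous:

```python
import numpy as np

QSTEP = 0.1  # coarse residual quantization step (our choice)

def encode_by_region(param: np.ndarray, labels: np.ndarray):
    """Encode a parameter map as one mean per homogeneous region plus
    quantized residuals; within a region the parameter varies little,
    so the residuals are small and compress well."""
    means = {int(l): float(param[labels == l].mean()) for l in np.unique(labels)}
    residual = param - np.vectorize(means.get)(labels)
    q = np.round(residual / QSTEP).astype(np.int8)
    return means, q

def decode_by_region(means: dict, q: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Invert encode_by_region: region mean plus dequantized residual."""
    return np.vectorize(means.get)(labels) + q.astype(float) * QSTEP
```

The reconstruction error is bounded by half the quantization step inside each region, while the region boundary itself is carried losslessly by the label map.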
- the present invention also provides a texture mapping method in which preprocessing pastes a texture image onto an object of a three-dimensional CG model; a plurality of parameters constituting an illumination equation that gives luminance are acquired for each pixel of the texture image pasted onto the object; a homogeneous region composed of pixels with similar parameter values is identified for each parameter; the parameter conversion processing is performed for each identified homogeneous region according to the content of a predetermined image conversion; and the luminance of each pixel of the object image is obtained using each parameter after the conversion processing.
- FIG. 1 is a flowchart showing an image conversion method according to the first embodiment of the present invention.
- FIG. 2 is a schematic diagram showing the relationship between luminance and illumination equation parameters.
- FIG. 3 is a conceptual diagram showing a geometric condition which is a premise of an illumination equation.
- FIG. 4 is a diagram for explaining an example of a surface normal vector measurement method.
- FIG. 5 is a diagram for explaining an example of a technique for separating diffuse reflection and specular reflection.
- FIG. 6 is a diagram for explaining a method of acquiring illumination equation parameters with reference to learning data.
- FIG. 7 is a diagram showing a pattern for determining a homogeneous region.
- FIG. 8 is a diagram showing an example of a unit area scanning method.
- FIG. 9 is a diagram showing an example of noise removal.
- FIG. 10 is a diagram showing processing for increasing the density of parameters for image enlargement.
- FIG. 11 is a diagram showing processing for reducing parameters for image reduction.
- FIG. 12 is a conceptual diagram showing parameter conversion processing for image compression in the second embodiment of the present invention.
- FIG. 13 is a diagram for explaining a third embodiment of the present invention, and shows a flow of a rendering process.
- FIG. 14 is a diagram illustrating a first configuration example that implements the present invention, using a personal computer.
- FIG. 15 is a second configuration example for realizing the present invention, and is a diagram showing a configuration using a server client system.
- FIG. 16 is a third configuration example for realizing the present invention, and is an example of a configuration for performing image conversion according to the present invention in photographing with a camera.
- FIG. 17 is a diagram showing the relationship between the position of the light source and the image taken with the wide-angle lens.
- FIG. 18 is a diagram showing a fourth configuration example for realizing the present invention, using a folding mobile phone.
- a first aspect of the invention provides: a first step of acquiring, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of specifying, for each parameter, a homogeneous region composed of pixels with similar parameter values; a third step of performing, for each parameter, conversion processing of the parameter for each specified homogeneous region; and a fourth step of obtaining the luminance of each pixel of the second image using each parameter after the conversion processing in the third step.
- the predetermined image conversion is image enlargement
- the conversion process in the third step is a process for increasing the density of the parameter.
- the predetermined image conversion is image reduction
- the conversion process in the third step is a process for reducing the density of the parameter.
- the image conversion method according to the first aspect, wherein the acquisition of the plurality of parameters in the first step is performed by measurement from the subject or estimation from the first image.
- the image conversion method according to the first aspect wherein, in the second step, the degree of similarity is evaluated using variances of values of the parameters in a plurality of pixels.
- the image conversion method according to the first aspect wherein the second step includes a process of performing noise removal in the specified homogeneous region.
- a texture mapping method comprising: a preprocessing step of pasting a texture image onto an object of a three-dimensional CG model; a first step of acquiring, for each pixel of the texture image pasted onto the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of specifying, for each parameter, a homogeneous region composed of pixels with similar parameter values; a third step of performing, for each parameter, conversion processing of the parameter for each homogeneous region identified in the second step according to the content of a predetermined image conversion; and a fourth step of obtaining the luminance of each pixel of the image of the object using each parameter after the conversion processing in the third step.
- as an image conversion device, the invention provides: a parameter acquisition unit that acquires a plurality of parameters constituting a predetermined illumination equation giving luminance to each pixel of the first image; a homogeneous region specifying unit that, for each parameter, specifies a homogeneous region composed of pixels with similar parameter values; a parameter conversion unit that, for each parameter, performs conversion processing of the parameter for each homogeneous region identified by the homogeneous region specifying unit according to the content of the predetermined image conversion; and a luminance calculation unit that obtains the luminance of each pixel of the second image using each parameter after the conversion processing by the parameter conversion unit.
- as a texture mapping device, the invention provides: a preprocessing unit that pastes a texture image onto an object of a three-dimensional CG model; a parameter acquisition unit that acquires, for each pixel of the texture image pasted onto the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a homogeneous region specifying unit that specifies, for each parameter, a homogeneous region composed of pixels with similar parameter values; a parameter conversion unit that performs, for each parameter, conversion processing of the parameter for each homogeneous region specified by the homogeneous region specifying unit according to the content of a predetermined image conversion; and a luminance calculation unit that obtains the luminance of each pixel of the image of the object using each parameter after the conversion processing by the parameter conversion unit.
- the server-client system that performs image conversion includes a server having the parameter acquisition unit, the homogeneous region specifying unit, and the parameter conversion unit of the eighth aspect, and a client having the luminance calculation unit of the eighth aspect, and the client instructs the server on the content of the image conversion.
- as a program for causing a computer to execute a method of performing a predetermined image conversion on a first image to generate a second image, the program causes the computer to: acquire, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance; specify, for each parameter, a homogeneous region composed of pixels having similar values; perform, for each parameter, conversion processing for each specified homogeneous region according to the content of the image conversion; and obtain the luminance of each pixel of the second image using each parameter after the conversion processing.
- as a texture mapping program, the invention causes a computer to execute: a preprocessing step of pasting a texture image onto an object of a three-dimensional CG model; a first step of acquiring, for each pixel of the texture image pasted onto the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of specifying, for each parameter, a homogeneous region composed of pixels having similar parameter values; a third step of performing, for each parameter, conversion processing of the parameter for each homogeneous region specified in the second step according to the content of a predetermined image conversion; and a fourth step of obtaining the luminance of each pixel of the image of the object using each parameter after the conversion processing in the third step.
- FIG. 1 is a flowchart showing an image conversion method according to the first embodiment of the present invention. Note that the image conversion method according to the present embodiment can be realized by causing a computer to execute a program for realizing the method.
- (Equation 1) and (Equation 2) are used as the illumination equations that give the luminance, and a homogeneous region is specified for each of the plurality of parameters constituting these equations. Then, for each homogeneous region, the parameter conversion process is performed to realize the predetermined image conversion.
- Ia is the luminance of the ambient light
- ρa is the reflectance for the ambient light
- Ii is the brightness of the illumination
- the vector N is the surface normal vector
- the vector L is the light source vector indicating the light source direction
- dω is the solid angle of the light source
- ρd is the bidirectional reflectance of the diffuse reflection component
- ρs is the bidirectional reflectance of the specular reflection component
- Fλ is the Fresnel coefficient
- m is the microfacet distribution
- n is the refractive index
- Kd is the diffuse reflection component ratio
- ks is the specular reflection component ratio
- kd + ks = 1.
- Vector H is a half vector located between light source vector L and viewpoint vector V
- θ is the angle between the surface normal vector N and the viewpoint vector V, and can also be calculated from the light source vector L, the surface normal vector N, and the viewpoint vector V.
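Putting the listed parameters together, here is a sketch of evaluating (Equation 1) for one pixel, with ρs supplied as an already-evaluated value of (Equation 2) and ks = 1 − kd as stated above (function names are ours):

```python
import numpy as np

def illumination_eq(Ia, rho_a, Ii, N, L, d_omega, kd, rho_d, rho_s):
    """Luminance Iv toward the viewpoint per the patent's (Equation 1):
    Iv = rho_a*Ia + Ii*(N.L)*d_omega*(kd*rho_d + ks*rho_s), ks = 1 - kd.
    rho_s is taken here as a precomputed value of (Equation 2)."""
    ks = 1.0 - kd
    n_dot_l = max(float(np.dot(N, L)), 0.0)  # clamp back-facing light
    return rho_a * Ia + Ii * n_dot_l * d_omega * (kd * rho_d + ks * rho_s)

def half_vector(L, V):
    """Half vector H midway between light source vector L and viewpoint
    vector V, as used by the specular term."""
    H = np.asarray(L, dtype=float) + np.asarray(V, dtype=float)
    return H / np.linalg.norm(H)
```

Decomposing a pixel's luminance into these parameters, rather than working on Iv directly, is what lets the method treat each factor's homogeneous regions separately.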
- FIG. 2 is a schematic diagram showing the relationship between luminance and illumination equation parameters.
- (a) is a graph showing the luminance distribution of the image shown in (b)
- (c) to (f) are graphs showing the distributions of the illumination equation parameters: the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, the diffuse reflection component ratio kd, and the surface normal vector N, respectively.
- the horizontal axis is the spatial position
- the vertical axis is the brightness or the value of each parameter.
- object X1 has a luminance distribution that brightens from left to right
- object X2 has a random luminance distribution with no regularity
- object X3 has a luminance distribution with a highlight at the center
- object X4 has a uniform luminance distribution at all spatial positions.
- the bidirectional reflectance ρd of the diffuse reflection component, the diffuse reflection component ratio kd, and the surface normal vector N each form homogeneous regions (AA1, AC1, AD1); only the bidirectional reflectance ρs of the specular reflection component changes. This change in ρs causes the change in luminance.
- the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, and the surface normal vector N each form homogeneous regions (AA2, AB1, AD2), and only the diffuse reflection component ratio kd changes.
- the diffuse reflection component ratio kd has a random change with no regularity, and the brightness also changes randomly to form a fine texture.
- the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, and the diffuse reflection component ratio kd form homogeneous regions (AA2, AB2, AC2); only the surface normal vector N changes. This change in N causes the change in luminance.
- each parameter (ρd, ρs, kd, N) has the same homogeneous region (AA3, AB3, AC3, AD3), so the luminance value is constant.
- in the range of object X4, the diffuse reflection component ratio kd is high, so the diffuse component dominates (Fig. 2(e)), while the bidirectional reflectance ρd of the diffuse reflection component is low (Fig. 2(c)); the luminance value is therefore low.
- as (Equation 1) shows, the luminance changes when any one of the parameters constituting the illumination equation changes; edge detection performed per parameter is therefore more stable than edge detection on luminance changes. In the present embodiment, since different homogeneous regions adjoin one another, edges can be obtained more stably for parameters whose homogeneous regions are obtained more stably. Therefore, by converting each parameter for each homogeneous region, image conversion can be executed while preserving edge sharpness and texture.
- initial setting is performed in step S00.
- the first image to be subjected to image conversion is acquired, and the threshold value THEPR for homogeneous region determination, the threshold value THMEPR for homogeneous region merge determination, and the threshold value THN for noise determination are set. How to use these threshold values will be described later.
- in step S10, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation are respectively acquired.
- the illumination equations of (Equation 1) and (Equation 2) described above are used.
- the ambient light luminance Ia, the ambient light reflectance ρa, the light source luminance Ii, the light source vector L, and the solid angle dω of the light source are called environmental conditions; the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, the diffuse reflection component ratio kd, and the specular reflection component ratio ks are called optical characteristics. These give the luminance value Iv of the reflected light in the viewpoint direction according to the illumination equation shown in (Equation 1).
- FIG. 3 is a conceptual diagram showing the geometric conditions assumed by (Equation 1). As shown in Fig. 3, light from the light source is incident on the current point of interest P on the object surface SF with irradiance Ii(N·L)dω, and is reflected with the diffuse reflection component kd·ρd and the specular reflection component ks·ρs. Ambient light is light that reaches the current point of interest P on the object surface SF from the surroundings by multiple reflections and the like, and forms the bias component of the luminance Iv in the viewing direction (vector V).
- Each parameter of (Equation 1) can be obtained by measurement from a subject or estimation from a given captured image.
- the surface normal vector N can be measured by a range finder or the like using the principle of triangulation (see, for example, Non-Patent Document 5).
- the principle of triangulation is that a triangle is uniquely determined when one side and the angles at both its ends are given.
- from two points A and B separated by a known distance l, the angles at which a point P is viewed are measured as α and β, respectively.
- the coordinate values (x, y) of point P are then given by the following expression.
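The expression itself is elided above; it can be reconstructed from the stated geometry (our reconstruction, placing A at the origin, B at (l, 0), with viewing angles α at A and β at B):

```latex
% From y = x\tan\alpha (at A) and y = (l - x)\tan\beta (at B):
x = \frac{l\tan\beta}{\tan\alpha + \tan\beta}, \qquad
y = \frac{l\tan\alpha\,\tan\beta}{\tan\alpha + \tan\beta}
```

Eliminating y between the two sight-line equations gives x, and substituting back gives y.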
- Non-Patent Document 6 discloses a technique that utilizes the property that a specular reflection component is polarized.
- the electric field component parallel to the plane of incidence/reflection and the electric field component perpendicular to it generally have different Fresnel coefficients, so the reflected light is polarized.
- the specular reflection component is generally polarized, whereas diffuse reflection is irregular reflection and thus unpolarized. Therefore, as shown in the figure:
- the intensity of the transmitted light RRP is the intensity of the component of the reflected light RR parallel to the polarization axis PFA of the polarizing filter PF. Therefore, when the specular reflection component from the object surface SF is observed while rotating the polarizing filter PF, the intensity of the transmitted light RRP changes according to the angle ψ between the polarization axis PFA of the polarizing filter PF and the polarization plane SPP of the specular reflection, and is given by the following equation:
- I(ψ) = (1/2)·Ld + Ls / (Fv(θ'i) + Fp(θ'i)) · ( Fv(θ'i) − (Fv(θ'i) − Fp(θ'i)) cos²ψ )
- Ld is the luminance of the diffuse reflection component
- Ls is the luminance of the specular reflection component
- θ'i is the incident angle of the light on the micro reflection surface
- Fp is the Fresnel coefficient of the parallel electric field component for the insulator
- Fv is the Fresnel coefficient of the perpendicular electric field component for the insulator.
- a method in which the correspondence between spatial response characteristics and illumination equation parameters is learned in advance, and the learning data is referred to when the parameters are acquired, is effective.
- the relationship between the image feature vector and the illumination equation parameter is learned in advance, and an image feature vector database 502 and an illumination equation parameter database 503 are prepared.
- the input image IIN as the first image is converted into an input image feature vector IINFV by the image feature analysis processing 501.
- the spatial response characteristic is obtained by, for example, wavelet transformation.
- the image feature vector database 502 selects the image feature vector closest to the input image feature vector IINFV, and outputs the input image feature vector number IINFVN.
- the illumination equation parameter database 503 receives the input image feature vector number IINFVN and outputs the illumination equation parameter corresponding to this as the input image illumination equation parameter IINLEP.
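The database lookup just described can be sketched as a nearest-neighbor search (Euclidean distance and the function name are our assumptions; the patent does not fix the metric):

```python
import numpy as np

def lookup_parameters(feat: np.ndarray, feat_db: np.ndarray, param_db: np.ndarray):
    """Select the stored image feature vector closest to the input feature
    vector IINFV and return its index (IINFVN) and the illumination
    equation parameters (IINLEP) stored under the same index."""
    idx = int(np.argmin(np.linalg.norm(feat_db - feat, axis=1)))
    return idx, param_db[idx]
```

The two databases are indexed identically, so the feature match in database 502 directly addresses the parameters in database 503.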
- the present invention does not limit the method of measuring or estimating the illumination equation parameters; any method can be applied.
- the surface normal vector N can be estimated by the photometric stereo method, using the generalized inverse matrix (Equation 9), from three or more images taken with different light source directions (R.J. Woodham, "Photometric method for determining surface orientation from multiple images", Optical Engineering 19, pp.139-144, 1980).
- the vector x is the surface normal vector scaled by the reflectance, ρd·N (the reflectance ρd appears as its length)
- the matrix L is the light source matrix in which the light source vectors L are collected, one per shot
- the vector v is the vector in which the luminance values Iv of the reflected light are collected, one per shot.
- the object surface is assumed to be a uniform diffuse surface (Lambertian surface), and the light source is assumed to be a point light source at infinity.
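Under these assumptions (Lambertian surface, point source at infinity), the photometric stereo solution above can be sketched with the generalized inverse of (Equation 9). This is an illustrative implementation, not the patent's code:

```python
import numpy as np

def photometric_stereo(L, v):
    """Woodham-style photometric stereo sketch: solve x = pinv(L) @ v,
    where each row of L is a light source vector and v holds the
    observed luminances Iv.  The length of x is the diffuse reflectance
    rho_d and its direction is the surface normal vector N."""
    x = np.linalg.pinv(L) @ v        # generalized inverse of (Equation 9)
    rho_d = np.linalg.norm(x)        # reflectance = length of x
    N = x / rho_d                    # unit surface normal
    return rho_d, N
```

With three synthetic lights and a surface of reflectance 0.8 facing the z-axis, the reflectance and normal are recovered exactly.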
- the method of separating diffuse reflection and specular reflection is different from the method shown in Fig.
- in step S20, for each parameter, homogeneous regions consisting of pixels with similar parameter values are identified.
- the similarity of the parameters is evaluated by the variance of the parameter over a region of a plurality of pixels.
- if this variance value is smaller than the homogeneous region determination threshold THEPR set in step S00, the region of the plurality of pixels is determined to be a homogeneous region; if it is greater than or equal to THEPR, the region is determined not to be homogeneous. In the latter case, it is presumed that the pixels in the region all differ from one another, or that the region contains different homogeneous regions.
- the homogeneous region determination threshold THEPR is set to 0.5 degrees, for example, and when the variance value is smaller than 0.5 degrees, it is determined as a homogeneous area, and when it is greater than or equal to 0.5 degrees, it is determined as heterogeneous.
- the diffuse reflection component ratio kd is a ratio and takes a value from 0 to 1. Therefore, the homogeneous region determination threshold THEPR is set to, for example, 0.01; when the variance value is smaller than 0.01, the region is determined to be homogeneous, and when it is greater than or equal to 0.01, it is determined to be heterogeneous.
- the setting of the area of the plurality of pixels for determining the similarity of the parameters is arbitrary, but here, a unit area having 5 pixels in the vertical direction and 5 pixels in the horizontal direction is used (S21).
- if the 28 types of determinations from P01 to P28 shown in FIG. 7 are performed, homogeneous regions can be extracted for all pattern shapes in the unit region UA.
- the patterns satisfy two conditions: (1) all the pixels included in the homogeneous region are adjacent to each other, and (2) the central pixel of the unit region UA must be included. The judgment is made in two stages for all 28 patterns. First, in the central 3×3 area CA, it is determined whether or not the three gray pixels among the nine pixels are homogeneous.
- for each pattern determined to be homogeneous, it is then determined whether or not the pattern remains homogeneous when the hatched pixels outside the central area CA are included. If multiple patterns are determined to be homogeneous, their union is taken as the homogeneous region.
- in step S22, when a new homogeneous region is recognized (Yes in S22), the homogeneous region data is updated to reflect the new homogeneous region (S23). Steps S21 to S23 are repeated until the determination is completed for all unit areas (S24). As shown in Fig. 8, if the 5-pixel × 5-pixel unit area UA is scanned so that one line overlaps horizontally and vertically, the homogeneous regions generated in the individual unit areas UA are joined together, and the processing can be extended to the entire image.
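The variance test of step S21 and the overlapped scan of Fig. 8 can be sketched as follows. This is an illustrative reading, assuming a single scalar parameter map; the threshold name `th_epr` stands for THEPR from step S00:

```python
import numpy as np

def is_homogeneous(param_map, top, left, th_epr):
    """Variance test for one 5x5 unit area (step S21): the area is
    homogeneous when the parameter variance falls below THEPR."""
    ua = param_map[top:top + 5, left:left + 5]
    return float(np.var(ua)) < th_epr

def scan_units(param_map, th_epr):
    """Visit 5x5 unit areas with a one-line overlap horizontally and
    vertically (stride 4), as in Fig. 8, so that regions found in
    neighbouring unit areas can later be joined."""
    flags = []
    for top in range(0, param_map.shape[0] - 4, 4):
        for left in range(0, param_map.shape[1] - 4, 4):
            flags.append(((top, left),
                          is_homogeneous(param_map, top, left, th_epr)))
    return flags
```

On a flat 9×9 map every unit area passes; perturbing one pixel makes only the unit areas containing it fail the variance test.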
- in step S25, the similarity between the plurality of homogeneous regions recognized in adjacent unit regions is evaluated, and similar homogeneous regions are merged.
- the method of evaluating the similarity of homogeneous regions is arbitrary. For example, an average parameter value may be obtained for each unit region, and the determination may be made using the difference of the average values. That is, if the difference is smaller than the homogeneous region merge determination threshold THMEPR set in step S00, the homogeneous regions are regarded as identical and merged.
- in step S26, it is determined whether or not there is noise in the homogeneous region. For example, this determination is based on the average of the parameter values of all pixels in the homogeneous region; when the difference between the parameter value of a pixel and this average is larger than the noise determination threshold THN set in step S00, the pixel is determined to be noise.
- the noise determination threshold THN is set to 30 degrees, for example, and when the difference from the average value is larger than 30 degrees, it is determined as noise.
- the noise determination threshold THN is set to 0.2, for example, and when the difference from the average value is larger than 0.2, it is determined to be noise.
- Fig. 9 shows an example of noise removal.
- Gray pixels are homogeneous regions
- P1 and P2 are pixels that are determined to be noise.
- the average of the parameter values of the pixels belonging to the homogeneous region among the 8 pixels surrounding the pixel determined to be noise is obtained, and the noise pixel is replaced with this average.
- for pixel P1, since all 8 surrounding pixels belong to the homogeneous region, it is replaced with the average of the parameter values of all 8 surrounding pixels.
- for pixel P2, only two of the eight surrounding pixels belong to the homogeneous region, so it is replaced with the average value of these two pixels.
- the noise removal method described here is merely an example, and any method may be used.
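As the text notes, any removal method may be used; the Fig. 9 scheme of averaging only the in-region neighbours can be sketched as follows (a minimal sketch assuming a boolean membership mask for the homogeneous region):

```python
def replace_noise(param_map, mask, y, x):
    """Replace the noise pixel at (y, x) with the average parameter
    value of those of its 8 neighbours that belong to the homogeneous
    region (mask is True), as in the Fig. 9 example for P1 and P2."""
    vals = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if (0 <= ny < param_map.shape[0]
                    and 0 <= nx < param_map.shape[1]
                    and mask[ny, nx]):
                vals.append(param_map[ny, nx])
    if vals:  # at least one neighbour belongs to the region
        param_map[y, x] = sum(vals) / len(vals)
    return param_map
```

A pixel whose eight neighbours all belong to the region (the P1 case) is replaced by the mean of all eight.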
- pixels that do not fall within any homogeneous region in step S20 form edges.
- in step S30, as the third step, a conversion process is performed for each parameter, in which the parameter is changed for each homogeneous region identified in step S20 according to the content of the predetermined image conversion.
- FIG. 10 is a conceptual diagram showing processing when image enlargement is performed as image conversion.
- as shown in Fig. 10, when enlarging the image, the parameters are made dense within each homogeneous region.
- Figure 10 (a) shows the distribution of the parameters before conversion.
- the homogeneous region AE1 where the average parameter value is P1 is adjacent to the homogeneous region AE2 where the average parameter value is P2.
- an edge is formed by the luminance difference between the pixels S1 and S2 located at the boundary between the homogeneous regions AE1 and AE2.
- to enlarge the image of Fig. 10 (a) by, for example, a factor of 2, a white circle pixel may be inserted between each pair of black circle pixels, as shown in Fig. 10 (b).
- the parameter value of a white circle pixel is, for example, that of an adjacent black circle pixel. Between the pixels S1 and S2, a new pixel S3 may be generated by copying either parameter value as it is.
- here, the parameter value of the pixel S1 is copied to the pixel S3, so that the luminance difference between the pixels S2 and S3 matches the luminance difference between the pixels S1 and S2 in Fig. 10 (a). This preserves the edge.
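The enlargement rule above — insert a new pixel that copies an adjacent parameter value as-is, so the boundary step survives — can be sketched for one row of parameter values (an illustrative reading, not the patent's implementation):

```python
def enlarge_row(params):
    """2x enlargement of one row of parameter values (Fig. 10 sketch):
    each inserted pixel copies its left neighbour's value, so within a
    homogeneous region the values stay constant and the luminance step
    at a region boundary (the S1/S2 edge) is preserved, not smoothed."""
    out = []
    for i, p in enumerate(params):
        out.append(p)
        if i + 1 < len(params):
            out.append(p)  # new pixel (like S3) copies an adjacent value
    return out
```

For example, the row `[1, 1, 2, 2]` enlarges to `[1, 1, 1, 1, 2, 2, 2]`: the 1→2 step remains a single abrupt transition.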
- FIG. 11 is a conceptual diagram showing a process when image reduction is performed as image conversion.
- when reducing the image, the density of the parameters is reduced within each homogeneous region.
- although the density reduction method is arbitrary, in Fig. 11 the average value of the parameter values of the surrounding pixels is used as an example.
- Figure 11 (a) shows the distribution of the parameters before conversion.
- the homogeneous region AF1 where the average parameter value is P1 is adjacent to the homogeneous region AF2 where the average parameter value is P2.
- an edge is formed by the luminance difference between the pixels S6 and S7 located at the boundary between the homogeneous regions AF1 and AF2.
- the average value of the parameter values in the pixel group SG1 is set as the parameter value of the pixel S4, and the average value of the parameter values in the pixel group SG2 is set as the parameter value of the pixel S5, thereby realizing a reduction in density.
- the change in the parameter value of the reduced image is smoothed.
- the luminance difference between the pixels S6 and S7, which forms the edge in Fig. 11 (a), is preserved as the luminance difference between the pixels S7 and S8 in Fig. 11 (b). That is, the parameter value of pixel S8 is copied from pixel S6.
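The reduction rule — average within a homogeneous region, copy across a region boundary so the edge survives — can be sketched for one row (an illustrative reading, assuming a per-pixel region label):

```python
def reduce_row(params, regions):
    """2x reduction of one row (Fig. 11 sketch): pixel pairs are averaged
    only when both lie in the same homogeneous region (regions gives a
    region label per pixel); across a boundary the value is copied
    instead, so the S6/S7 edge survives as the S7/S8 edge."""
    out = []
    for i in range(0, len(params) - 1, 2):
        if regions[i] == regions[i + 1]:
            out.append((params[i] + params[i + 1]) / 2)  # smooth in-region
        else:
            out.append(params[i])  # preserve the edge value unchanged
    return out
```

For `[1, 1, 1, 5, 5, 5]` with regions `[0, 0, 0, 1, 1, 1]`, the pair straddling the boundary copies the value 1 instead of averaging, keeping the 1→5 step abrupt.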
- in step S40, as the fourth step, the luminance of each pixel of the second image after the predetermined image conversion is obtained using each parameter after the conversion processing in step S30.
- when each parameter is substituted into the illumination equation of (Equation 1), the reflected light intensity Iv is calculated for each pixel.
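Step S40 can be sketched by evaluating a simplified illumination equation of the ambient + diffuse + specular form used in the text. This is an assumed simplified form, not necessarily term-for-term identical to the patent's (Equation 1); the specular reflectance ρs is passed in already evaluated:

```python
def reflected_intensity(Ia, ka, Ii, N, L, kd, rho_d, ks, rho_s):
    """Evaluate a simplified illumination equation (sketch):
        Iv = Ia*ka + Ii*(N.L)*(kd*rho_d + ks*rho_s)
    where kd and ks are the diffuse/specular component ratios and N, L
    are unit vectors (surface normal, light source direction)."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(N, L)))  # clamp backfacing
    return Ia * ka + Ii * n_dot_l * (kd * rho_d + ks * rho_s)
```

For example, with Ia = 1, ka = 0.1, Ii = 1, N = L = (0, 0, 1), kd = 0.7, ρd = 0.5, ks = 0.3, ρs = 2, the result is 0.1 + (0.35 + 0.6) = 1.05.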
- as described above, in this embodiment, the luminance is decomposed into illumination equation parameters, and image conversion such as image enlargement or image reduction is performed using the correlation between pixels for each parameter. That is, since the image conversion is executed for each parameter and for each homogeneous region, the edge portions are preserved as boundary conditions between the homogeneous regions. In addition, since the homogeneous regions are identified based on the similarity of the illumination equation parameters, which are physical characteristics of the subject, the determination has a physical basis. Therefore, image conversion with stable image quality can be realized while preserving the sharpness of the edges and the texture.
- an image conversion apparatus may be configured that includes a parameter acquisition unit that executes step S10, a homogeneous region specifying unit that executes step S20, a parameter conversion unit that executes step S30, and a luminance calculation unit that executes step S40.
- the illumination equation used in the present invention is not limited to that shown in the present embodiment.
- the following may be used.
- Equation 5 is for a diffusely reflecting object, and has 6 parameters.
- Iv,a represents the light intensity in the line-of-sight direction due to light from the surroundings.
- Equation 6 does not distinguish between diffuse reflection and specular reflection, and there are five parameters.
- (Equation 7) does not consider the reflectance, and has two parameters.
- Iv,i represents the light intensity from the pixel of interest in the line-of-sight direction.
- step S30 processing for compression-encoding each parameter is performed for image compression.
- the compressed image data is transferred or recorded without executing step S40.
- each parameter is decoded and the luminance of each pixel is calculated.
- the image conversion method according to the present embodiment can also be realized by causing a computer to execute a program for realizing the method.
- an image compression apparatus including a parameter acquisition unit that executes step S10, a homogeneous region specifying unit that executes step S20, and a parameter compression unit that executes step S30 may be configured.
- FIG. 12 is a conceptual diagram showing parameter conversion processing in the present embodiment.
- the white circles represent the parameter values of the pixels belonging to the homogeneous regions AG1 to AG3, and the hatched circles represent the parameter values of the pixels not belonging to the homogeneous region.
- the parameter values are almost equal within each of the homogeneous regions AG1 to AG3, and therefore the information about the parameter values is almost entirely concentrated in the average value. Therefore, in each homogeneous region AG1 to AG3, the average of the parameter values and the difference between each pixel's parameter value and that average are encoded, and a small amount of code is assigned to the differences. As a result, the parameter values can be compression-encoded with a small amount of code without impairing the image quality.
- first, the code type TP1 is declared (here, "difference from the average value"); then the average value D1 and the difference D2 from the average value at each pixel follow, terminated by the separator signal SG1.
- a special code may be assigned as the code type so that the separator can be recognized. If the difference D2 is small enough to be ignored, run-length coding may be applied.
- since the homogeneous regions occupy most of the image, encoding the parameter values of pixels that do not belong to any homogeneous region as they are poses no problem.
- the homogeneous regions AG2 and AG3 declare "difference from the average value" as the encoding types TP3 and TP4 as in the homogeneous region AG1.
- by decomposing the luminance value into its constituent parameters and taking the correlation with neighboring pixels, a higher correlation than with the luminance value itself can be expected, and the compression efficiency can thus be improved.
- since compression encoding is performed for each homogeneous region, a higher compression ratio than luminance-based coding can be achieved while preserving sharpness and texture.
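The per-region coding of Fig. 12 — declare a code type, store the region average D1 once, then the per-pixel differences D2, with run-length coding for negligible differences — can be sketched as an encode/decode pair. The tag names (`"AVG"`, `"DIFF"`, `"RUN0"`) are hypothetical, not the patent's code types:

```python
def encode_region(values):
    """Sketch of per-region coding: one average, then per-pixel
    differences; runs of zero difference are run-length coded."""
    avg = sum(values) / len(values)
    diffs = [round(v - avg, 6) for v in values]
    encoded, i = [("AVG", avg)], 0      # code-type declaration + average D1
    while i < len(diffs):
        if diffs[i] == 0:               # run-length for negligible D2
            j = i
            while j < len(diffs) and diffs[j] == 0:
                j += 1
            encoded.append(("RUN0", j - i))
            i = j
        else:
            encoded.append(("DIFF", diffs[i]))
            i += 1
    return encoded

def decode_region(encoded):
    """Invert encode_region: re-add each difference to the average."""
    avg = encoded[0][1]
    out = []
    for tag, val in encoded[1:]:
        if tag == "RUN0":
            out.extend([avg] * val)
        else:
            out.append(avg + val)
    return out
```

A perfectly uniform region collapses to just the average plus one run-length token, which is the source of the compression gain the text describes.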
- in this embodiment, the image conversion method described above is applied to texture mapping in computer graphics.
- FIG. 13 is a flowchart showing the main flow of the rendering process.
- the rendering process is a process of converting a three-dimensional model generated in a computer into two-dimensional image data in computer graphics (see, for example, p. 79 of Non-Patent Document 1). As shown in Figure 13, the rendering process mainly consists of viewpoint and light source setting S101, coordinate transformation S102, hidden surface removal S103, shading and shadowing S104, texture mapping S105, and viewport transformation S106.
- step S101 when the viewpoint VA and the light source LS are set, the appearance is determined.
- in step S102, the objects managed in their local coordinate systems are unified into a normalized coordinate system, and in step S103, the hidden surface portions that cannot be seen from the viewpoint are deleted. Then, in step S104, how the light from the light source LS strikes the objects OA and OB is calculated, and shades and shadows are generated.
- step S 105 texture mapping is performed to generate textures TA and TB for the objects OA and OB.
- the texture is generally acquired as image data.
- the texture image TIA is deformed according to the shape of the object OA and is synthesized on the object OA.
- similarly, the texture image TIB is deformed according to the shape of the object OB and synthesized onto the object OB.
- the image conversion described above is applied in this texture mapping. That is, first, preprocessing for pasting the texture images TIA and TIB onto the objects OA and OB of the 3D CG model is performed. Then, processing proceeds according to the flow of FIG. 1. In step S10, the parameters are acquired for each pixel of the texture images TIA and TIB pasted on the objects OA and OB, using the optical parameters of the two-dimensional texture images TIA and TIB and the surface normal vectors of the objects OA and OB. The subsequent processing is the same as in the first embodiment. Note that the texture mapping method according to the present embodiment can also be realized by causing a computer to execute a program for realizing the method.
- a preprocessing unit that performs the above-described preprocessing, a parameter acquisition unit that executes step S10, a homogeneous region specifying unit that executes step S20, a parameter conversion unit that executes step S30, and a step A texture mapping device including a luminance calculation unit that executes S40 may be configured.
- step S106 viewport conversion is performed to generate a two-dimensional image having an image size matching the displayed screen SCN or window WND.
- the rendering processing needs to be re-executed whenever the viewpoint or the position of the light source changes, and is therefore repeated frequently in interactive systems such as game devices.
- in texture mapping, the texture data to be pasted on the object surface is usually prepared as an image. Therefore, whenever the viewpoint or light source changes, the texture data must be converted by enlargement, reduction, rotation, or color change.
- FIG. 14 is a diagram showing a first configuration example, in which image conversion according to the present invention is performed using a personal computer.
- the resolution of the camera 101 is lower than the resolution of the display 102.
- an enlarged image is created by an image conversion program loaded in the main memory 103.
- the low resolution image captured by the camera 101 is recorded in the image memory 104.
- an image feature vector database 502 and an illumination equation parameter database 503 as shown in FIG. 6 are prepared in advance in the external storage device 105, and can be referred to from the image conversion program in the main memory 103.
- the processing by the image conversion program is the same as in the first embodiment, and a homogeneous region is determined for each illumination equation parameter, and densification is performed in the homogeneous region. That is, a low-resolution image in the image memory 104 is read via the memory bus 106, enlarged in accordance with the resolution of the display 102, and transferred again to the video memory 107 via the memory bus 106. The enlarged image transferred to the video memory 107 is displayed on the display 102.
- the present invention is not constrained to the configuration of FIG. 14 and can take various other configurations.
- the illumination equation parameters may be measured directly from the subject by a measuring instrument.
- the image feature vector database 502 and the illumination equation parameter database 503 of the external storage device 105 are not necessary.
- low-resolution images may also be acquired via the network 108. It is also possible to store texture data in the external storage device 105 and execute the texture mapping shown in the third embodiment in the main memory 103.
- the image conversion program loaded in the main memory 103 may perform image reduction as shown in the first embodiment.
- image compression may be performed according to the second embodiment, in which case the illumination equation parameters are data-compressed and can be transmitted to the network 108 or the like.
- for the camera 101, any type of imaging device, such as a camera-equipped mobile phone, a digital still camera, or a video movie camera, can be used. Furthermore, the present invention can be realized in a playback device that plays back pre-recorded video.
- FIG. 15 is a diagram showing a second configuration example, which is an example of a configuration for performing image conversion according to the present invention using a server client system.
- the resolution of the camera 201 is lower than the resolution of the display 202.
- image enlargement is executed in the server-client system.
- the server 301 includes an image feature analysis unit 501, an image feature vector database 502, and an illumination equation parameter database 503.
- the server 301 calculates the input image illumination equation parameter IINLEP and outputs it to the parameter operation unit 205. This operation corresponds to step S10 in the flow of FIG. 1.
- the image feature analysis unit 501, the image feature vector database 502, and the illumination equation parameter database 503 constitute a parameter acquisition unit.
- an image conversion instruction is passed from the image conversion instruction unit 203 of the client 302 to the parameter operation instruction unit 204 of the server 301 as an image conversion instruction signal ICIS.
- the parameter operation instruction unit 204 translates the content of the image conversion given by the image conversion instruction signal ICIS into operations on the illumination equation parameters, and outputs them to the parameter operation unit 205 as the parameter operation instruction signal LEPS.
- the parameter operation unit 205 operates the illumination equation parameter IINLEP to perform image enlargement or image compression, and generates a new parameter value IOUTLEP. This operation corresponds to steps S20 and S30 in the flow of FIG.
- the parameter operation unit 205 corresponds to the homogeneous region specifying unit and the parameter conversion unit.
- the server 301 can provide the client 302 with the new parameter value IOUTLEP according to the image conversion instruction from the client 302 via the network 206.
- an image generation unit 207 as a luminance calculation unit generates an enlarged image and supplies it to the display 202. This operation corresponds to step S40 in the flow of FIG.
- the present invention is not limited to the configuration of FIG. 15. When the resolution of the camera 201 is higher than the resolution of the display 202, the parameter operation unit 205 may perform image reduction as shown in the first embodiment. Further, if the parameter operation unit 205 operates as an encoding device according to the second embodiment and the image generation unit 207 operates as a decoding device, compressed data can be distributed over the network 206.
- the combination of image devices and the position of each means on the system are arbitrary.
- for the camera 201, any type of imaging device, such as a camera-equipped mobile phone, a digital still camera, or a video movie camera, can be used.
- the present invention can also be realized in a playback apparatus that plays back pre-recorded video.
- FIG. 16 is a diagram showing a third configuration example, which is an example of a configuration for performing image conversion according to the present invention in photographing with a camera.
- the camera 401 includes a wide-angle lens 402, and can, for example, capture a wide field of view with an angle of view of 180 degrees at a time.
- the light source 403 can be photographed by attaching the wide-angle lens 402 facing upward.
- an xyz three-dimensional coordinate system is defined with the optical axis of the wide-angle lens 402 as the z-axis, the horizontal direction of the wide-angle image sensor 404 inside the camera 401 as the x-axis, and the vertical direction of the wide-angle image sensor 404 as the y-axis; the focal position of the wide-angle lens 402 is taken as the coordinate origin, and the light source vector L is obtained in this coordinate system.
- FIG. 17A shows the relationship between the position of the light source 403 and the wide-angle image 405 taken by the wide-angle lens 402.
- the light source 403 moved from the position PS1 on the curve LT to the position PS5 is recorded from the position PXI on the straight line ST of the wide-angle image 405 to the position PX5.
- a method for obtaining the light source vector L2 will be described, where the angle formed by the straight line ST and the x-axis is φ, and the angle formed by the straight line ST and the light source vector L2 is θ.
- Figure 17 (b) is a view of the wide-angle image 405 of Figure 17 (a) from the z-axis direction.
- the distance between position PX1 and coordinate origin O is d, and the distance between position PX2 and coordinate origin O is r.
- if the pixel positions of position PX1, position PX2, and the coordinate origin O on the wide-angle image are (x1, y1), (x2, y2), and (x0, y0), respectively, the distance d between position PX1 and the coordinate origin O can be calculated from these coordinates.
- Figure 17 (c) shows the triangle formed by drawing, from position PX2 in the z-axis direction, the line of intersection LT with the light source vector L2. If the length of the intersection line LT is z, the following equation is obtained.
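The geometry of Fig. 17 — recovering a light source direction from where the source projects onto the wide-angle image — can be sketched under an assumed equidistant fisheye model, in which the radial image distance is proportional to the zenith angle. This projection model and the parameter names are assumptions for illustration; the patent derives the geometry from the figure itself:

```python
import math

def light_source_vector(x2, y2, phi, max_r, fov_deg=180.0):
    """Sketch: map the image position (x2, y2) of the light source to a
    unit light source vector, assuming an equidistant fisheye (radius
    proportional to zenith angle).  phi is the angle of line ST with
    the x-axis; max_r is the image radius of the field-of-view edge."""
    r = math.hypot(x2, y2)                           # distance of PX2 from O
    theta = (r / max_r) * math.radians(fov_deg / 2)  # zenith angle
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```

A source imaged at the origin maps to the optical axis (0, 0, 1), and the returned vector is always unit length.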
- the subject is photographed by the subject photographing lens 406 and the subject imaging element 407, and the first image output from the subject imaging element 407 is converted into the second image by the image conversion unit 408.
- the image conversion unit 408 executes, for example, image enlargement according to the flowchart in FIG. 1, image compression according to FIG.
- the coordinate system used for image conversion it is preferable to use the xyz three-dimensional coordinate system of the subject imaging element 407 because the image conversion is performed on the output of the subject imaging element 407. Therefore, the light source vector (Expression 14) expressed in the xyz three-dimensional coordinate system of the wide-angle imaging element 404 is converted into the xyz three-dimensional coordinate system of the subject imaging element 407.
- the transformation of the coordinate system can be realized by a transformation of the coordinate axes. Let (xlight,x, ylight,x, zlight,x) be the vector that represents the x-axis of the xyz three-dimensional coordinate system of the wide-angle imaging element 404 in the xyz three-dimensional coordinate system of the subject imaging element 407. If the y-axis and z-axis are defined in the same way as the x-axis, the vectors of the three axes are related by the 3×3 matrix M as follows.
- the light source vector L is converted from the xyz three-dimensional coordinate system of the wide-angle imaging element 404 to the xyz three-dimensional coordinate system of the subject imaging element 407.
- since the light source is often located above the camera 401, the light source 403 can be photographed if, for example, the wide-angle lens 402 having an angle of view of 180 degrees is used. If the light source 403 cannot be captured within the angle of view of the wide-angle lens 402, the orientation of the camera 401 is changed so that the light source 403 falls within the angle of view. In that case, the change in the orientation of the camera 401 must be measured, so the camera 401 has a built-in three-dimensional attitude sensor 409 (consisting of an acceleration sensor or the like); the three-dimensional motion of the xyz coordinate axes of the wide-angle imaging element 404 is acquired from the three-dimensional attitude sensor 409 and coordinate-transformed in the same way as (Equation 16).
- the mobile phone 601 includes a far-end camera 602 (a camera that captures a subject in front of the user of the mobile phone 601) and a near-end camera 603 (a camera that captures the user of the mobile phone 601).
- the orientation of the far-end camera 602 changes greatly as the folded display unit 604 is opened. That is, as shown in (a), when the opening angle DAG of the display unit 604 is small, the far-end camera 602 captures the area above the mobile phone 601.
- the xyz three-dimensional coordinate system uses, for example, the focal position of the near-end camera 603 as the coordinate origin; from its relationship to the focal position of the far-end camera 602, which is determined by the structure of the mobile phone 601, the images captured by both cameras can be managed in the same xyz three-dimensional coordinate system. Obviously, the far-end camera 602 can also be used for photographing the light source. As described above, the light source vector, among the parameters of the illumination equation shown in FIG. 3, can be calculated.
- the camera 401 includes a polarizing filter, so that the reflected light from the object incident on the subject photographing lens 406 can be separated into diffuse and specular reflection components by, for example, the method described with (Equation 4) and Fig. 5. If the diffuse reflection component is used, the surface normal vector N can be calculated by the photometric stereo method described with (Equation 9). As described with (Equation 8), the photometric stereo method requires three or more images with different light source directions. Therefore, if the light source 403 is movable, the images required for (Equation 8) can be obtained by setting three or more positions of the light source 403 and photographing at each position.
- the specular reflection component corresponds to ks ρs in (Equation 1).
- the unknown parameters included in (Equation 2) are the specular reflection component ratio ks, the Fresnel coefficient Fλ, the microfacet distribution m, and the refractive index n.
- the surface normal vector N can also be measured by using a range finder, in addition to the configuration of FIG. 16.
- as described above, the present invention can be executed on a wide variety of video devices, such as widely used personal computers, server-client systems, camera-equipped mobile phones, digital still cameras, video movie cameras, and televisions, and requires no special equipment, operation, or management. Note that the present invention does not constrain the device connection form or the internal configuration of the devices, such as implementation on dedicated hardware or a combination of software and hardware.
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006547855A JP3967367B2 (en) | 2004-12-07 | 2005-11-25 | Image conversion method, apparatus and program, texture mapping method, apparatus and program, and server client system |
US11/369,975 US7486837B2 (en) | 2004-12-07 | 2006-03-07 | Method, device, and program for image conversion, method, device and program for texture mapping, and server-client system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004354274 | 2004-12-07 | ||
JP2004-354274 | 2004-12-07 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/369,975 Continuation US7486837B2 (en) | 2004-12-07 | 2006-03-07 | Method, device, and program for image conversion, method, device and program for texture mapping, and server-client system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006061999A1 true WO2006061999A1 (en) | 2006-06-15 |
Family
ID=36577831
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/021687 WO2006061999A1 (en) | 2004-12-07 | 2005-11-25 | Image conversion method, device, and program, texture mapping method, device, and program, and server-client system |
Country Status (4)
Country | Link |
---|---|
US (1) | US7486837B2 (en) |
JP (1) | JP3967367B2 (en) |
CN (1) | CN100573579C (en) |
WO (1) | WO2006061999A1 (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1768387B1 (en) * | 2005-09-22 | 2014-11-05 | Samsung Electronics Co., Ltd. | Image capturing apparatus with image compensation and method therefor |
WO2007139067A1 (en) * | 2006-05-29 | 2007-12-06 | Panasonic Corporation | Image high-resolution upgrading device, image high-resolution upgrading method, image high-resolution upgrading program and image high-resolution upgrading system |
US8953684B2 (en) * | 2007-05-16 | 2015-02-10 | Microsoft Corporation | Multiview coding with geometry-based disparity prediction |
GB2458927B (en) * | 2008-04-02 | 2012-11-14 | Eykona Technologies Ltd | 3D Imaging system |
US8463072B2 (en) * | 2008-08-29 | 2013-06-11 | Adobe Systems Incorporated | Determining characteristics of multiple light sources in a digital image |
JP5106432B2 (en) * | 2009-01-23 | 2012-12-26 | 株式会社東芝 | Image processing apparatus, method, and program |
TW201035910A (en) * | 2009-03-18 | 2010-10-01 | Novatek Microelectronics Corp | Method and apparatus for reducing spatial noise of images |
JP5273389B2 (en) * | 2009-09-08 | 2013-08-28 | 株式会社リコー | Image processing apparatus, image processing method, program, and recording medium |
CN102472620B (en) * | 2010-06-17 | 2016-03-02 | 松下电器产业株式会社 | Image processing apparatus and image processing method |
US8274656B2 (en) * | 2010-06-30 | 2012-09-25 | Luminex Corporation | Apparatus, system, and method for increasing measurement accuracy in a particle imaging device |
JP5742427B2 (en) * | 2011-04-25 | 2015-07-01 | 富士ゼロックス株式会社 | Image processing device |
US11379968B2 (en) * | 2017-12-08 | 2022-07-05 | Panasonic Intellectual Property Management Co., Ltd. | Inspection system, inspection method, program, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0326181A (en) * | 1989-06-23 | 1991-02-04 | Sony Corp | System for converting image |
JPH0737105A (en) * | 1993-07-19 | 1995-02-07 | Hitachi Ltd | Plotting method for outline and ridgeline |
JPH0944654A (en) * | 1995-07-26 | 1997-02-14 | Sony Corp | Image processing device and method therefor, and noise eliminating device and method therefor |
JP2000057378A (en) * | 1998-06-02 | 2000-02-25 | Sony Corp | Image processor, image processing method, medium, and device and method for extracting contour |
JP2000137833A (en) * | 1998-10-29 | 2000-05-16 | Mitsubishi Materials Corp | Device and method for track generation and recording medium thereof |
JP2003216973A (en) * | 2002-01-21 | 2003-07-31 | Canon Inc | Method, program, device and system for processing three-dimensional image |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5872864A (en) * | 1992-09-25 | 1999-02-16 | Olympus Optical Co., Ltd. | Image processing apparatus for performing adaptive data processing in accordance with kind of image |
US5704024A (en) * | 1995-07-20 | 1997-12-30 | Silicon Graphics, Inc. | Method and an apparatus for generating reflection vectors which can be unnormalized and for using these reflection vectors to index locations on an environment map |
JPH1169372A (en) * | 1997-08-14 | 1999-03-09 | Fuji Photo Film Co Ltd | Image lightness control method, digital camera used for the same and image processor |
EP1008957A1 (en) | 1998-06-02 | 2000-06-14 | Sony Corporation | Image processing device and image processing method |
JP3921015B2 (en) * | 1999-09-24 | 2007-05-30 | 富士通株式会社 | Image analysis apparatus and method, and program recording medium |
US20020169805A1 (en) * | 2001-03-15 | 2002-11-14 | Imation Corp. | Web page color accuracy with image supervision |
US6753875B2 (en) * | 2001-08-03 | 2004-06-22 | Hewlett-Packard Development Company, L.P. | System and method for rendering a texture map utilizing an illumination modulation value |
JP4197858B2 (en) * | 2001-08-27 | 2008-12-17 | 富士通株式会社 | Image processing program |
US7034820B2 (en) * | 2001-12-03 | 2006-04-25 | Canon Kabushiki Kaisha | Method, apparatus and program for processing a three-dimensional image |
JP2003274427A (en) * | 2002-03-15 | 2003-09-26 | Canon Inc | Image processing apparatus, image processing system, image processing method, storage medium, and program |
JP2005149390A (en) | 2003-11-19 | 2005-06-09 | Fuji Photo Film Co Ltd | Image processing method and device |
CN1910623B (en) * | 2005-01-19 | 2011-04-20 | 松下电器产业株式会社 | Image conversion method, texture mapping method, image conversion device, server-client system |
- 2005-11-25 JP JP2006547855A, granted as patent JP3967367B2/en (not active, Expired - Fee Related)
- 2005-11-25 CN CN200580034775.XA, granted as patent CN100573579C/en (not active, Expired - Fee Related)
- 2005-11-25 WO PCT/JP2005/021687, published as WO2006061999A1/en (not active, Application Discontinuation)
- 2006-03-07 US US11/369,975, granted as patent US7486837B2/en (not active, Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
CN100573579C (en) | 2009-12-23 |
US7486837B2 (en) | 2009-02-03 |
US20060176520A1 (en) | 2006-08-10 |
CN101040295A (en) | 2007-09-19 |
JPWO2006061999A1 (en) | 2008-06-05 |
JP3967367B2 (en) | 2007-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP3967367B2 (en) | Image conversion method, apparatus and program, texture mapping method, apparatus and program, and server client system | |
JP3996630B2 (en) | Image conversion method, texture mapping method, image conversion apparatus, server client system, image conversion program, shadow recognition method, and shadow recognition apparatus | |
US8131116B2 (en) | Image processing device, image processing method and image processing program | |
US10887519B2 (en) | Method, system and apparatus for stabilising frames of a captured video sequence | |
JP4435867B2 (en) | Image processing apparatus, method, computer program, and viewpoint conversion image generation apparatus for generating normal line information | |
US7688363B2 (en) | Super-resolution device, super-resolution method, super-resolution program, and super-resolution system | |
US20050219642A1 (en) | Imaging system, image data stream creation apparatus, image generation apparatus, image data stream generation apparatus, and image data stream generation system | |
EP2120007A1 (en) | Image processing system, method, device and image format | |
EP2061005A2 (en) | Device and method for estimating depth map, and method for generating intermediate image and method for encoding multi-view video using the same | |
US20180174326A1 (en) | Method, System and Apparatus for Determining Alignment Data | |
US7348990B2 (en) | Multi-dimensional texture drawing apparatus, compressing apparatus, drawing system, drawing method, and drawing program | |
JPWO2006033257A1 (en) | Image conversion method, image conversion apparatus, server client system, portable device, and program | |
WO2007108041A1 (en) | Video image converting method, video image converting device, server client system, mobile apparatus, and program | |
Dumont et al. | A Prototype for Practical Eye-Gaze Corrected Video Chat on Graphics Hardware. | |
KR102146839B1 (en) | System and method for building real-time virtual reality | |
Farin et al. | Enabling arbitrary rotational camera motion using multisprites with minimum coding cost | |
JP2014039126A (en) | Image processing device, image processing method, and program | |
Farin et al. | Minimizing MPEG-4 sprite coding cost using multi-sprites | |
WO2019008233A1 (en) | A method and apparatus for encoding media content | |
Georgiev et al. | A general framework for depth compression and multi-sensor fusion in asymmetric view-plus-depth 3D representation | |
Li et al. | Image panoramic mosaicing with global and local registration | |
Lee | Low complexity mosaicking and up-sampling techniques for high resolution video display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 11369975; Country of ref document: US |
| AK | Designated states | Kind code of ref document: A1; Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
| AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
| DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWP | Wipo information: published in national office | Ref document number: 11369975; Country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 2006547855; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 200580034775.X; Country of ref document: CN |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 05809130; Country of ref document: EP; Kind code of ref document: A1 |
| WWW | Wipo information: withdrawn in national office | Ref document number: 5809130; Country of ref document: EP |