WO2006061999A1 - Image conversion method, device, and program, texture mapping method, device, and program, and server-client system - Google Patents

Image conversion method, device, and program, texture mapping method, device, and program, and server-client system Download PDF

Info

Publication number
WO2006061999A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
parameter
conversion
pixel
parameters
Prior art date
Application number
PCT/JP2005/021687
Other languages
French (fr)
Japanese (ja)
Inventor
Hideto Motomura
Katsuhiro Kanamori
Kenji Kondo
Satoshi Sato
Original Assignee
Matsushita Electric Industrial Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co., Ltd. filed Critical Matsushita Electric Industrial Co., Ltd.
Priority to JP2006547855A priority Critical patent/JP3967367B2/en
Priority to US11/369,975 priority patent/US7486837B2/en
Publication of WO2006061999A1 publication Critical patent/WO2006061999A1/en

Classifications

    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation

Definitions

  • Image conversion method, apparatus, and program
  • Texture mapping method, apparatus, and program
  • Server-client system
  • the present invention relates to an image processing technique, and more particularly to a technique for realizing image conversion such as enlargement or reduction, image compression, and texture mapping.
  • With the digitization of image devices and networks, arbitrary image devices can be interconnected, and the freedom of image exchange is increasing.
  • an environment has been established in which users can freely handle images without being restricted by differences in systems. For example, users can output images taken with a digital still camera to a printer, publish them on a network, or view them on a home TV.
  • Scalability refers to the degree of freedom to extract various image sizes from a single bitstream: standard TV image data in some cases, HDTV image data in others.
  • With scalability, there is no need to prepare a separate transmission route for each image format, so less transmission capacity is needed.
  • Texture mapping is a technique that expresses the pattern and texture of an object surface by attaching a 2D image to the surface of a 3D object modeled in a computer.
  • Texture mapping involves processing such as enlargement, reduction, deformation, and rotation of the 2D image so that it fits the object surface (see Non-Patent Document 1).
  • In order to newly generate image data that did not exist at the time of sampling, luminance values are interpolated by the bilinear method, the bicubic method, or the like (see Non-Patent Document 1). Since interpolation can generate only intermediate values of the sampled data, sharpness at edges and the like tends to deteriorate. A technique has therefore been disclosed in which the interpolated image is used as an initial enlarged image, edge portions are then extracted, and only the edges are emphasized (see Non-Patent Document 2 and Non-Patent Document 3). In particular, Non-Patent Document 3 devised a technique for selectively performing edge enhancement according to the sharpness of an edge, by introducing a multi-resolution representation and the Lipschitz index.
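  • As a reference for the interpolation step just described, the following is a minimal bilinear 2x upscaling sketch in Python with NumPy. The function name and the fixed factor of 2 are illustrative assumptions; the point is that every interpolated value is a weighted mean of neighbors, which is why interpolation alone softens edges.

```python
import numpy as np

def bilinear_upscale_2x(img):
    """Upscale a grayscale image 2x by bilinear interpolation.

    Every new sample is a weighted mean of its four neighbors, so no
    output value can exceed the local input range -- which is why edges
    come out softened and need separate enhancement (Non-Patent
    Documents 2 and 3).
    """
    h, w = img.shape
    ys = np.linspace(0, h - 1, 2 * h)
    xs = np.linspace(0, w - 1, 2 * w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```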
  • Patent Document 1: JP 2005-149390 A
  • Non-Patent Document 1: Shinya Araya, "Clear 3D Computer Graphics", Kyoritsu Shuppan, pp. 144-145, September 25, 2003
  • Non-Patent Document 2: H. Greenspan, C. H. Anderson, "Image enhancement by non-linear extrapolation in frequency space", SPIE Vol. 2182, Image and Video Processing II, 1994
  • Non-Patent Document 3: Nakashige et al., "Multi-scale Image Resolution on Luminance Gradient Planes", IEICE Transactions D-II, Vol. J81-D-II, No. 10, pp. 2249-2258, October 1998
  • Non-Patent Document 4: Multimedia Communication Study Group (ed.), "Point-Illustrated Broadband + Mobile Standard MPEG Textbook", ASCII, pp. 25-29, February 11, 2003
  • Non-Patent Document 5: Image Processing Handbook Editorial Committee (ed.), "Image Processing Handbook", Shokodo, p. 393, June 1987
  • Non-Patent Document 6: Shinji Umeyama, "Separation of Diffuse/Specular Reflection Components from Object Appearance Using Multiple Observations through a Polarization Filter and Probabilistic Independence", Image Recognition and Understanding Symposium 2002, pp. I-469 to I-476, 2002
  • Edge enhancement of interpolated images during image enlargement, and smoothing during image reduction, are empirical methods; no explicit noise countermeasures are taken, so the image quality after image conversion cannot be guaranteed. This is a further problem.
  • Accordingly, an object of the present invention is to make image quality more stable in image conversion, image compression, and texture mapping by making the processing less susceptible to noise than in the past.
  • In the present invention, as an image conversion method, for each pixel of a first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance are acquired; for each parameter, homogeneous regions consisting of pixels whose parameter values are similar are specified; for each parameter, conversion processing of the parameter is performed for each specified homogeneous region according to the content of the image conversion; and the luminance of each pixel of a second image is obtained using each parameter after the conversion processing.
  • a plurality of parameters constituting an illumination equation that gives luminance are respectively acquired for the first image to be subjected to image conversion.
  • the parameters referred to here are, for example, the optical characteristics of the subject, environmental conditions, the surface normal of the subject, and the like.
  • A homogeneous region is specified for each parameter, and the conversion processing of the parameter is performed for each specified homogeneous region according to the content of the image conversion.
  • the luminance of each pixel of the second image after image conversion is obtained using each parameter after conversion processing.
  • the luminance is decomposed into illumination equation parameters, and image conversion is performed using the correlation between pixels for each parameter.
  • The illumination equation parameters, such as surface normals and optical characteristics, are highly independent of one another. For this reason, when processing is performed for each parameter, peculiarities such as noise are easier to grasp than when processing is performed on the luminance, which is given as the integrated value of the parameters.
  • Furthermore, the optical characteristics can be decomposed into diffuse reflection components and specular reflection components, which are highly independent factors, so the peculiarities of noise stand out even more.
  • The homogeneous regions are specified based on the similarity of the illumination equation parameters, which are physical characteristics of the subject, so they are defined with physical support.
  • The edge portions are preserved as boundary conditions between the homogeneous regions. Therefore, image conversion with stable image quality can be realized while preserving the sharpness of edges and texture. Moreover, since edges need not be detected directly as in the prior art, the problem of noise contamination does not arise.
  • In the image conversion method of the present invention, when image enlargement is performed, processing for increasing the density of the parameters may be performed as the conversion processing for each parameter.
  • The homogeneous regions are defined with physical support. Therefore, compared with the conventional empirical technique of edge-enhancing an initial enlarged image obtained by interpolation, the present invention, which increases the density of the parameters for each homogeneous region, is objective, and the image quality can be stabilized further.
  • When image reduction is performed, processing for reducing the density of the parameters may be performed.
  • The present invention, which reduces the density of the parameters for each homogeneous region, is objective compared with the conventional empirical method using a low-pass filter, and the image quality can be made more stable.
  • In the present invention, as an image compression method, for each pixel of an image, a plurality of parameters constituting a predetermined illumination equation that gives luminance are acquired;
  • for each parameter, a homogeneous region consisting of pixels whose parameter values are similar is specified, and for each parameter, compression encoding of the parameter is performed for each specified homogeneous region.
  • A plurality of parameters constituting an illumination equation that gives luminance are acquired for each pixel of the image to be compressed. Then, a homogeneous region is specified for each parameter, and the parameter is compression-encoded for each specified homogeneous region.
  • the correlation between neighboring pixels is high with respect to the illumination equation parameter, so that the compression efficiency can be improved over the image compression based on the luminance value.
  • the edge part is saved as a boundary condition between homogeneous regions. Therefore, it is possible to realize image compression with high compression efficiency while preserving the sharpness of the edges and the texture.
  • The present invention also provides a texture mapping method in which preprocessing for pasting a texture image onto an object of a three-dimensional CG model is performed; for each pixel of the texture image pasted onto the object,
  • a plurality of parameters constituting an illumination equation that gives luminance are acquired, and a homogeneous region consisting of pixels with similar parameter values is identified for each parameter; then, for each parameter, according to a predetermined image conversion,
  • conversion processing of the parameter is performed for each identified homogeneous region, and the luminance of each pixel of the image of the object is obtained using each parameter after the conversion processing.
  • FIG. 1 is a flowchart showing an image conversion method according to the first embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing the relationship between luminance and illumination equation parameters.
  • FIG. 3 is a conceptual diagram showing a geometric condition which is a premise of an illumination equation.
  • FIG. 4 is a diagram for explaining an example of a surface normal vector measurement method.
  • FIG. 5 is a diagram for explaining an example of a technique for separating diffuse reflection and specular reflection.
  • FIG. 6 is a diagram for explaining a method of acquiring illumination equation parameters with reference to learning data.
  • FIG. 7 is a diagram showing a pattern for determining a homogeneous region.
  • FIG. 8 is a diagram showing an example of a unit area scanning method.
  • FIG. 9 is a diagram showing an example of noise removal.
  • FIG. 10 is a diagram showing processing for increasing the density of parameters for image enlargement.
  • FIG. 11 is a diagram showing processing for reducing parameters for image reduction.
  • FIG. 12 is a conceptual diagram showing parameter conversion processing for image compression in the second embodiment of the present invention.
  • FIG. 13 is a diagram for explaining a third embodiment of the present invention, and shows a flow of a rendering process.
  • FIG. 14 is a diagram illustrating a first configuration example that implements the present invention, using a personal computer.
  • FIG. 15 is a diagram showing a second configuration example for realizing the present invention, using a server-client system.
  • FIG. 16 is a diagram showing a third configuration example for realizing the present invention, in which image conversion according to the present invention is performed in photographing with a camera.
  • FIG. 17 is a diagram showing the relationship between the position of the light source and the image taken with the wide-angle lens.
  • FIG. 18 is a diagram showing a third configuration example for realizing the present invention, showing a configuration using a folding mobile phone.
  • Specifically, the image conversion method of the present invention comprises: a first step of acquiring, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of specifying, for each parameter, a homogeneous region consisting of pixels whose parameter values are similar;
  • a third step of performing, for each parameter, conversion processing of the parameter for each homogeneous region specified in the second step, according to the content of the predetermined image conversion; and a fourth step of obtaining the luminance of each pixel of the second image using each parameter after the conversion processing in the third step.
  • When the predetermined image conversion is image enlargement,
  • the conversion processing in the third step is processing for increasing the density of the parameters.
  • When the predetermined image conversion is image reduction,
  • the conversion processing in the third step is processing for reducing the density of the parameters.
  • In the image conversion method according to the first aspect, the plurality of parameters in the first step are acquired by measurement from the subject or by estimation from the first image.
  • In the image conversion method according to the first aspect, in the second step, the degree of similarity is evaluated using the variance of the values of the parameter over a plurality of pixels.
  • the image conversion method according to the first aspect wherein the second step includes a process of performing noise removal in the specified homogeneous region.
  • The present invention also provides a texture mapping method comprising: a preprocessing step of pasting a texture image onto an object of a three-dimensional CG model; a first step of acquiring, for each pixel of the texture image pasted onto the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of specifying, for each parameter, a homogeneous region consisting of pixels whose parameter values are similar; a third step of performing, for each parameter, conversion processing of the parameter for each homogeneous region specified in the second step, according to the content of a predetermined image conversion; and a fourth step of obtaining the luminance of each pixel of the image of the object using each parameter after the conversion processing in the third step.
  • The present invention also provides an image conversion device comprising: a parameter acquisition unit that acquires, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a homogeneous region specifying unit that specifies, for each parameter, a homogeneous region consisting of pixels whose parameter values are similar; a parameter conversion unit that performs, for each parameter, conversion processing of the parameter for each homogeneous region specified by the homogeneous region specifying unit, according to the content of the predetermined image conversion; and a luminance calculation unit that obtains the luminance of each pixel of the second image using each parameter after the conversion processing by the parameter conversion unit.
  • The present invention also provides a texture mapping device comprising: a preprocessing unit that pastes a texture image onto an object of a three-dimensional CG model; a parameter acquisition unit that acquires, for each pixel of the texture image pasted onto the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a homogeneous region specifying unit that specifies, for each parameter, a homogeneous region consisting of pixels having similar parameter values; a parameter conversion unit that performs, for each parameter, conversion processing of the parameter for each homogeneous region specified by the homogeneous region specifying unit, according to the content of a predetermined image conversion; and a luminance calculation unit that obtains the luminance of each pixel of the image of the object using each parameter after the conversion processing by the parameter conversion unit.
  • The server-client system of the present invention, which performs image conversion, comprises a server having the parameter acquisition unit, the homogeneous region specifying unit, and the parameter conversion unit of the eighth aspect, and a client having the luminance calculation unit of the eighth aspect; the client instructs the server as to the content of the image conversion.
  • The present invention also provides a program for causing a computer to execute a method of performing predetermined image conversion on a first image to generate a second image, the method comprising: a first step of acquiring, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of specifying, for each parameter, a homogeneous region consisting of pixels having similar parameter values; a third step of performing, for each parameter, conversion processing of the parameter for each homogeneous region specified in the second step, according to the content of the predetermined image conversion; and a fourth step of obtaining the luminance of each pixel of the second image using each parameter after the conversion processing in the third step.
  • The present invention likewise provides a program for causing a computer to execute a texture mapping method comprising: a preprocessing step of pasting a texture image onto an object of a three-dimensional CG model; a first step of acquiring, for each pixel of the texture image pasted onto the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of specifying, for each parameter, a homogeneous region consisting of pixels having similar parameter values; a third step of performing, for each parameter, conversion processing of the parameter for each homogeneous region specified in the second step, according to the content of a predetermined image conversion; and a fourth step of obtaining the luminance of each pixel of the image of the object using each parameter after the conversion processing in the third step.
  • FIG. 1 is a flowchart showing an image conversion method according to the first embodiment of the present invention. Note that the image conversion method according to the present embodiment can be realized by causing a computer to execute a program for realizing the method.
  • In the present embodiment, (Equation 1) and (Equation 2) are used as the illumination equation that gives the luminance, and a homogeneous region is specified for each of the plurality of parameters constituting these equations. Then, for each homogeneous region, conversion processing of the parameter is performed to realize the predetermined image conversion.
  • Ia is the luminance of the ambient light,
  • ρa is the reflectance for the ambient light,
  • Ii is the luminance of the illumination,
  • vector N is the surface normal vector,
  • vector L is the light source vector indicating the light source direction,
  • dω is the solid angle of the light source,
  • ρd is the bidirectional reflectance of the diffuse reflection component,
  • ρs is the bidirectional reflectance of the specular reflection component,
  • Fλ is the Fresnel coefficient,
  • m is the microfacet distribution,
  • n is the refractive index,
  • kd is the diffuse reflection component ratio,
  • ks is the specular reflection component ratio, with kd + ks = 1.
  • Vector H is the half vector located midway between the light source vector L and the viewpoint vector V,
  • and β is the angle between the surface normal vector N and the half vector H,
  • which can be calculated from the light source vector L, the surface normal vector N, and the viewpoint vector V.
  • FIG. 2 is a schematic diagram showing the relationship between luminance and illumination equation parameters.
  • (a) is a graph showing the luminance distribution of the image shown in (b), and
  • (c) to (f) are graphs showing the distributions of the illumination equation parameters:
  • the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, the diffuse reflection component ratio kd, and the surface normal vector N, respectively.
  • the horizontal axis is the spatial position
  • the vertical axis is the brightness or the value of each parameter.
  • Object X1 has a luminance distribution that brightens from left to right;
  • object X2 has a random luminance distribution with no regularity;
  • object X3 has a luminance distribution with a highlight in the center;
  • object X4 has a uniform luminance distribution at all spatial positions.
  • In object X1, the bidirectional reflectance ρd of the diffuse reflection component, the diffuse reflection component ratio kd, and the surface normal vector N each have a homogeneous region (AA1, AC1, AD1), and only the bidirectional reflectance ρs of the specular reflection component changes. This change in ρs causes the change in luminance.
  • In object X2, the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, and the surface normal vector N each have a homogeneous region (AA2, AB1, AD2), and only the diffuse reflection component ratio kd changes.
  • The diffuse reflection component ratio kd changes randomly with no regularity, so the luminance also changes randomly, forming a fine texture.
  • In object X3, the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, and the diffuse reflection component ratio kd have homogeneous regions (AA2, AB2, AC2), and only the surface normal vector N changes. This change in N causes the change in luminance.
  • In object X4, every parameter (ρd, ρs, kd, N) has a homogeneous region (AA3, AB3, AC3, AD3), so the luminance value is constant.
  • In the range of object X4, the diffuse reflection component ratio kd is high, i.e. the diffuse reflection component is dominant (Fig. 2(e)), while the bidirectional reflectance ρd of the diffuse reflection component is low (Fig. 2(c)), so the luminance value is low.
  • As (Equation 1) shows, when any of the parameters constituting the illumination equation changes, the luminance changes. It follows that edge detection performed on each parameter is more stable than edge detection based on luminance change. In the present embodiment, edges occur where different homogeneous regions adjoin, so edges can be obtained more stably from the parameters, for which homogeneous regions are determined more stably. Therefore, by converting each parameter for each homogeneous region, image conversion can be executed while preserving edge sharpness and texture.
  • Initial setting is performed in step S00.
  • The first image to be subjected to image conversion is acquired, and the threshold THEPR for homogeneous region determination, the threshold THMEPR for homogeneous region merge determination, and the threshold THN for noise determination are set. How these thresholds are used will be described later.
  • In step S10 as the first step, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation are acquired.
  • the illumination equations of (Equation 1) and (Equation 2) described above are used.
  • The ambient light luminance Ia, the ambient light reflectance ρa, the light source luminance Ii, the light source vector L, and the solid angle dω of the light source are called environmental conditions, and the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, the diffuse reflection component ratio kd, and the specular reflection component ratio ks are called optical characteristics. These give the luminance Iv of the reflected light in the viewpoint direction according to the illumination equation shown in (Equation 1).
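  • (Equation 1) itself appears only as an image in the source document. From the parameter definitions above and the geometric description that follows, it can be reconstructed as follows (a reconstruction from the surrounding definitions, not a verbatim copy of the patent's formula):

$$I_v = I_a\,\rho_a + I_i\,(N \cdot L)\,d\omega\,\big(k_d\,\rho_d + k_s\,\rho_s\big)$$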
  • FIG. 3 is a conceptual diagram showing the geometric conditions assumed by (Equation 1). As shown in Fig. 3, light from the light source is incident on the current point of interest P on the object surface SF with irradiance Ii(N·L)dω, and is reflected with a diffuse reflection component of kd·ρd and a specular reflection component of ks·ρs. Ambient light is light that reaches the current point of interest P on the object surface SF from the surroundings via multiple reflections and the like, and contributes a bias component to the luminance Iv in the viewing direction (vector V).
  • Each parameter of (Equation 1) can be obtained by measurement from a subject or estimation from a given captured image.
  • the surface normal vector N can be measured by a range finder or the like using the principle of triangulation (see, for example, Non-Patent Document 5).
  • The principle of triangulation is that a triangle is uniquely determined once one side and the angles at its two ends are fixed.
  • Suppose point P is viewed from two points A and B separated by a known distance l, at angles α and β respectively;
  • the coordinate values (x, y) of point P are then given by the following.
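  • The formula itself is an image in the source. Under the setup above (A at the origin, B at distance l along the x-axis, with viewing angles α at A and β at B), the standard triangulation result is:

$$x = \frac{l\,\tan\beta}{\tan\alpha + \tan\beta}, \qquad y = \frac{l\,\tan\alpha\,\tan\beta}{\tan\alpha + \tan\beta}$$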
  • Non-Patent Document 6 discloses a technique that utilizes the property that a specular reflection component is polarized.
  • Since the Fresnel coefficients usually differ between the electric field component parallel to the plane of incidence/reflection and the electric field component perpendicular to it, the reflected light is polarized.
  • The specular reflection component is therefore generally polarized, whereas diffuse reflection is irregular reflection and has no polarization. Therefore, as shown in FIG. 5,
  • the intensity of the transmitted light RRP is the intensity of the component of the reflected light RR parallel to the polarization axis PFA of the polarizing filter PF. Therefore, when the specular reflection component from the object surface SF is observed while rotating the polarizing filter PF, the intensity of the transmitted light RRP changes according to the angle ψ between the polarization axis PFA of the polarizing filter PF and the polarization plane SPP of the specular reflection, and is given by the following equation.
  • (Reconstructed into LaTeX from the garbled source, following the visible structure and the definitions below:)

$$I(\psi) = L_d + \frac{L_s}{2}\Big[F_V(\theta'_i) + F_P(\theta'_i) - \big(F_V(\theta'_i) - F_P(\theta'_i)\big)\cos 2\psi\Big]$$
  • Ld is the luminance of the diffuse reflection component
  • Ls is the luminance of the specular reflection component
  • θ′i is the angle of incidence of the light at the microfacet reflection surface,
  • FP is the Fresnel coefficient of the electric field component parallel to the dielectric surface, and
  • FV is the Fresnel coefficient of the electric field component perpendicular to the dielectric surface.
  • A method is effective in which the correspondence between spatial response characteristics and the illumination equation parameters is learned in advance, and the learning data are referred to when the parameters are acquired.
  • the relationship between the image feature vector and the illumination equation parameter is learned in advance, and an image feature vector database 502 and an illumination equation parameter database 503 are prepared.
  • The input image IIN as the first image is converted into an input image feature vector IINFV by the image feature analysis processing 501.
  • the spatial response characteristic is obtained by, for example, wavelet transformation.
  • the image feature vector database 502 selects the image feature vector closest to the input image feature vector IINFV, and outputs the input image feature vector number IINFVN.
  • the illumination equation parameter database 503 receives the input image feature vector number IINFVN and outputs the illumination equation parameter corresponding to this as the input image illumination equation parameter IINLEP.
  • The present invention does not limit the method of measuring or estimating the illumination equation parameters; any method can be applied.
  • The surface normal vector N can be estimated from three or more images taken with different light source directions by the photometric stereo method, using the generalized inverse matrix of (Equation 9) (R. J. Woodham, "Photometric method for determining surface orientation from multiple images", Optical Engineering 19, pp. 139-144, 1980); a numerical sketch follows after this list.
  • The vector x is the surface normal vector N scaled by the reflectance ρd (that is, ρd·N),
  • the matrix L is the light source matrix collecting the light source vectors L over the number of shots, and
  • the vector v is a vector collecting the luminance values Iv of the reflected light in the viewpoint direction over the number of shots.
  • the object surface is assumed to be a uniform diffuse surface (Lambertian surface), and the light source is assumed to be a point light source at infinity.
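  • The least-squares estimation of (Equation 9) can be sketched as follows in Python with NumPy. The function and array names are illustrative assumptions, not the patent's notation; the sketch assumes the Lambertian surface and infinite point light sources stated above, with three or more images under known light directions.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Estimate albedo-scaled surface normals (rho_d * N) per pixel.

    images:     array of shape (K, H, W), K >= 3 luminance images
    light_dirs: array of shape (K, 3), unit light source vectors L
    Assumes a Lambertian surface and point light sources at infinity,
    so per pixel: Iv = rho_d * (N . L).
    """
    K, H, W = images.shape
    v = images.reshape(K, -1)            # (K, H*W) luminance vectors
    L = np.asarray(light_dirs, float)    # (K, 3) light source matrix
    # x = (L^T L)^{-1} L^T v -- the generalized inverse of (Equation 9)
    x, *_ = np.linalg.lstsq(L, v, rcond=None)   # (3, H*W)
    x = x.T.reshape(H, W, 3)             # per-pixel rho_d * N
    rho_d = np.linalg.norm(x, axis=2)    # diffuse reflectance (albedo)
    normals = x / np.maximum(rho_d[..., None], 1e-12)
    return normals, rho_d
```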
  • A method of separating diffuse reflection and specular reflection other than the method shown in Fig. 5 may also be used.
  • In step S20 as the second step, for each parameter, a homogeneous region consisting of pixels whose parameter values are similar is specified.
  • The similarity of the parameters is evaluated by the variance of the parameter values over a region of a plurality of pixels.
  • When this variance is smaller than the homogeneous region determination threshold THEPR set in step S00, the region of the plurality of pixels is determined to be a homogeneous region; when it is greater than or equal to THEPR,
  • the region is determined not to be homogeneous. In that case, it is presumed that the pixels in the region all differ from one another, or that the region contains several different homogeneous areas.
  • For an angular parameter such as the surface normal vector N, the homogeneous region determination threshold THEPR is set to, for example, 0.5 degrees: when the variance is smaller than 0.5 degrees the region is determined to be homogeneous, and when it is 0.5 degrees or more it is determined to be heterogeneous.
  • The diffuse reflection component ratio kd is a ratio and takes values from 0 to 1, so THEPR is set to, for example, 0.01: when the variance is smaller than 0.01 the region is determined to be homogeneous, and when it is 0.01 or more it is determined to be heterogeneous.
  • The choice of the plural-pixel region used to judge parameter similarity is arbitrary; here, a unit region of 5 pixels vertically by 5 pixels horizontally is used (S21).
  • If the 28 types of determinations P01 to P28 shown in FIG. 7 are performed, homogeneous regions can be extracted for all pattern shapes in the unit region UA.
  • The conditions are that (1) all pixels included in a homogeneous region are adjacent to each other, and (2) the central pixel of the unit region UA is included. Judgment is made in two stages for all 28 patterns. First, in the central 3×3 area CA, it is determined whether the three gray pixels among the nine pixels are homogeneous.
  • For a pattern determined to be homogeneous, it is then determined whether the pattern is homogeneous including the hatched pixels outside the central area CA.
  • When several patterns are judged homogeneous, their union is taken as the homogeneous region.
  • In step S22, when a homogeneous region is newly recognized (Yes in S22), the homogeneous region data is updated to reflect the new homogeneous region (S23). Steps S21 to S23 are repeated until the determination has been completed for all unit regions (S24). As shown in Fig. 8, if the 5×5-pixel unit region UA is scanned so that it overlaps by one line horizontally and vertically, the homogeneous regions generated in each unit region UA are joined together and can be extended to the whole image.
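  • The variance test of S21 can be sketched as follows in Python. For brevity this checks the full 5×5 window rather than the 28 shape patterns of Fig. 7; the step of 4 makes adjacent unit regions overlap by one line, as in Fig. 8. All names are illustrative assumptions.

```python
import numpy as np

def homogeneous_mask(param, th_epr, size=5, step=4):
    """Mark pixels covered by a homogeneous 5x5 unit region.

    param:  (H, W) array of one illumination-equation parameter
    th_epr: homogeneous-region determination threshold THEPR
    step=4 scans so adjacent unit regions overlap by one line,
    letting regions found in neighboring windows join up (Fig. 8).
    """
    H, W = param.shape
    mask = np.zeros((H, W), dtype=bool)
    for y in range(0, H - size + 1, step):
        for x in range(0, W - size + 1, step):
            window = param[y:y + size, x:x + size]
            if window.var() < th_epr:          # variance test of S21
                mask[y:y + size, x:x + size] = True
    return mask
```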
  • In step S25, the similarity between homogeneous regions recognized in adjacent unit regions is evaluated, and similar homogeneous regions are merged.
  • The method of evaluating the similarity of homogeneous regions is arbitrary. For example, an average parameter value may be computed for each region and the judgment made on the difference between the averages: if the difference is smaller than the homogeneous region merge determination threshold THMEPR set in step S00, the two homogeneous regions are regarded as identical and merged.
  • In step S26, it is determined whether there is noise within a homogeneous region. For example, this determination takes the average of the parameter values of all pixels in the homogeneous region, and when the difference between a pixel's parameter value and this average is larger than the noise determination threshold THN set in step S00, that pixel is determined to be noise.
  • For an angular parameter such as the surface normal vector N, the noise determination threshold THN is set to, for example, 30 degrees, and a pixel whose difference from the average exceeds 30 degrees is determined to be noise.
  • For the diffuse reflection component ratio kd, the noise determination threshold THN is set to, for example, 0.2, and a pixel whose difference from the average exceeds 0.2 is determined to be noise.
  • Fig. 9 shows an example of noise removal.
  • The gray pixels form a homogeneous region, and
  • P1 and P2 are pixels determined to be noise.
  • For a pixel determined to be noise, the average of the parameter values of those of its 8 surrounding pixels that belong to the homogeneous region is computed, and the noise value is replaced with this average.
  • For pixel P1, all 8 surrounding pixels belong to the homogeneous region, so it is replaced with the average of the parameter values of all 8 surrounding pixels.
  • For pixel P2, two of the 8 surrounding pixels belong to the homogeneous region, so it is replaced with the average of those two pixels.
  • the noise removal method described here is merely an example, and any method may be used.
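  • As one concrete form of the neighbor-average replacement described above, a Python sketch follows; the function signature and the mask representation are illustrative assumptions.

```python
import numpy as np

def remove_noise(param, region_mask, noise_mask):
    """Replace noise pixels with the mean of their homogeneous 8-neighbors.

    param:       (H, W) parameter values
    region_mask: True where the pixel belongs to the homogeneous region
    noise_mask:  True where the pixel was judged noise (|value - mean| > THN)
    """
    out = param.copy()
    H, W = param.shape
    for y, x in zip(*np.nonzero(noise_mask)):
        vals = [param[j, i]
                for j in range(max(y - 1, 0), min(y + 2, H))
                for i in range(max(x - 1, 0), min(x + 2, W))
                if (j, i) != (y, x) and region_mask[j, i]]
        if vals:               # all 8 neighbors for P1, two for P2 in Fig. 9
            out[y, x] = np.mean(vals)
    return out
```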
  • Pixels that are not included in any homogeneous region in step S20 form edges.
  • In step S30 as the third step, for each parameter, conversion processing of the parameter is performed for each homogeneous region identified in step S20, according to the content of the predetermined image conversion.
  • FIG. 10 is a conceptual diagram showing processing when image enlargement is performed as image conversion.
  • Fig. 10 when enlarging the image, the parameters are made dense within the homogeneous region.
  • Figure 10 (a) shows the distribution of the parameters before conversion.
  • the homogeneous region AE1 where the average parameter value is P1 is adjacent to the homogeneous region AE2 where the average parameter value is P2.
  • An edge is formed by the luminance difference between the pixels S1 and S2 located at the boundary between the homogeneous regions AE1 and AE2.
  • To enlarge the image of Fig. 10(a) by a factor of 2, for example, as shown in Fig. 10(b),
  • a white-circle pixel is inserted between each pair of black-circle pixels.
  • The parameter value of a white-circle pixel is copied from, for example, the adjacent black-circle pixel.
  • Between the pixels S1 and S2, a new pixel S3 may be generated by copying either parameter value as it is.
  • In Fig. 10(b), the parameter value of pixel S1 is copied to pixel S3, so the luminance difference between pixels S2 and S3 matches the luminance difference between pixels S1 and S2 in Fig. 10(a). This preserves the edge.
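  • The copy-based densification of Fig. 10 can be sketched on a one-dimensional parameter scanline as follows; the 1-D simplification and the names are illustrative assumptions.

```python
import numpy as np

def densify_2x(params):
    """Double the sampling density of a parameter scanline.

    Each inserted pixel copies its left neighbor's parameter value, so
    inside a homogeneous region the values stay constant, and at a region
    boundary the full parameter jump (the edge of Fig. 10) is preserved
    between the new pixel S3 and the right-hand pixel S2.
    """
    params = np.asarray(params)
    return np.repeat(params, 2)[:-1]   # a, a, b, b, c, c, ... drop the tail
```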
  • FIG. 11 is a conceptual diagram showing a process when image reduction is performed as image conversion.
  • When reducing the image, the parameters are reduced in density within each homogeneous region.
  • The density reduction method is arbitrary; in Fig. 11, as an example, the average of the parameter values of the surrounding pixels is used.
  • Figure 11 (a) shows the distribution of the parameters before conversion.
  • the homogeneous region AF1 where the average parameter value is P1 is adjacent to the homogeneous region AF2 where the average parameter value is P2.
  • An edge is formed by the luminance difference between the pixels S6 and S7 located at the boundary between the homogeneous regions AF1 and AF2.
  • the average value of the parameter values in the pixel group SG1 is set as the parameter value of the pixel S4, and the average value of the parameter values in the pixel group SG2 is set as the parameter value of the pixel S5, thereby realizing a reduction in density.
  • As a result, changes in the parameter values of the reduced image are smoothed.
  • However, the luminance difference between the pixels S6 and S7, which form the edge in Fig. 11(a), is preserved as the luminance difference between the pixels S7 and S8 in Fig. 11(b). That is, the parameter value of pixel S8 is copied from pixel S6.
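  • The matching reduction of Fig. 11 can be sketched in the same one-dimensional form: averaging within a region corresponds to the pixel groups SG1/SG2, and the copy at a region boundary corresponds to copying S6 to S8. The names and the 1-D simplification are again illustrative assumptions.

```python
import numpy as np

def reduce_2x(params, region_ids):
    """Halve the sampling density of a parameter scanline.

    Pixel pairs inside one homogeneous region are averaged (pixel groups
    SG1/SG2 -> pixels S4/S5 in Fig. 11); a pair straddling a region
    boundary instead copies the boundary-side value (S6 -> S8), so the
    parameter jump at the edge is not smoothed away.
    """
    params = np.asarray(params, float)
    out = []
    for i in range(0, len(params) - 1, 2):
        if region_ids[i] == region_ids[i + 1]:
            out.append(params[i:i + 2].mean())   # smooth within a region
        else:
            out.append(params[i])                # preserve the edge value
    return np.array(out)
```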
  • In step S40 as the fourth step, the luminance of each pixel of the second image after the predetermined image conversion is obtained using each parameter after the conversion processing in step S30.
  • That is, each parameter is substituted into the illumination equation of (Equation 1), and the reflected-light luminance Iv is calculated for each pixel.
  • the luminance is decomposed into illumination equation parameters, and image conversion such as image enlargement or image reduction is performed using the correlation between pixels for each parameter. That is, since image conversion is executed for each parameter for each homogeneous region, the edge portion is stored as a boundary condition between the homogeneous regions. In addition, since the homogeneous region is specified based on the similarity of the illumination equation parameters that are physical characteristics of the subject, it is determined with physical support. Therefore, image conversion with stable image quality can be realized while preserving the sharpness of the edge and the texture.
  • An image conversion device may be configured that includes a parameter acquisition unit that executes step S10, a homogeneous region specifying unit that executes step S20, a parameter conversion unit that executes step S30, and a luminance calculation unit that executes step S40.
  • the illumination equation used in the present invention is not limited to that shown in the present embodiment.
  • the following may be used.
  • (Equation 5) is for a diffusely reflecting object and has six parameters.
  • Iv,a represents the light intensity in the line-of-sight direction coming from the surroundings.
  • (Equation 6) does not distinguish between diffuse reflection and specular reflection, and has five parameters.
  • (Equation 7) does not consider the reflectance, and has two parameters.
  • Iv,i represents the light intensity in the line-of-sight direction coming from the pixel of interest.
  • In step S30, processing for compression-encoding each parameter is performed to achieve image compression.
  • The compressed image data is transferred or recorded without executing step S40.
  • At decoding time, each parameter is decoded and the luminance of each pixel is calculated.
  • the image conversion method according to the present embodiment can also be realized by causing a computer to execute a program for realizing the method.
  • an image compression apparatus including a parameter acquisition unit that executes step S10, a homogeneous region specifying unit that executes step S20, and a parameter compression unit that executes step S30 may be configured.
  • FIG. 12 is a conceptual diagram showing parameter conversion processing in the present embodiment.
  • the white circles represent the parameter values of the pixels belonging to the homogeneous regions AG1 to AG3, and the hatched circles represent the parameter values of the pixels not belonging to the homogeneous region.
  • The parameter values are almost equal within each of the homogeneous regions AG1 to AG3, so the information carried by the parameter values is concentrated almost entirely in the average value. Therefore, in each homogeneous region AG1 to AG3, the average parameter value and each pixel's difference from that average are encoded, with only a small code amount assigned to the differences. As a result, the parameter values can be compression-encoded with a small code amount without impairing image quality.
  • First, the encoding type TP1 is declared (here, "difference from the average value"), followed by the average value
  • D1 and the difference D2 from the average at each pixel, and then the separator signal SG1.
  • A special code may be assigned as the encoding type so that the separator can be recognized. If the difference D2 is small enough to be ignored, a run-length code is applied.
  • Since the homogeneous regions occupy most of the image, it is not a problem to encode the parameter values of pixels that do not belong to any homogeneous region as they are.
  • The homogeneous regions AG2 and AG3 declare "difference from the average value" as the encoding types TP3 and TP4, in the same way as the homogeneous region AG1.
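  • The "average plus differences" scheme of Fig. 12 can be sketched as follows in Python; the tuple-based framing, the threshold eps, and the token names are illustrative assumptions, not the patent's actual code stream.

```python
import numpy as np

def encode_region(values, eps=1e-2):
    """Encode one homogeneous region as (type, mean, diffs).

    values: parameter values of the region's pixels
    Residuals below eps are dropped and summarized run-length style,
    mirroring the 'difference from the average value' scheme of Fig. 12.
    """
    values = np.asarray(values, float)
    mean = values.mean()              # D1: the region average
    diffs = values - mean             # D2: per-pixel residuals
    if np.all(np.abs(diffs) < eps):   # negligible residuals:
        return ("avg+runlength", mean, len(values))
    return ("avg+diffs", mean, diffs)

# Example: a region whose pixels barely deviate from the mean is
# reduced to one average plus a run length ('avg+runlength', 0.5, 4).
print(encode_region([0.501, 0.499, 0.500, 0.500]))
```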
  • In this way, by decomposing the luminance value into the parameters that constitute it and exploiting the correlation between neighboring pixels for each parameter, a higher correlation than for the luminance value itself can be expected, so the compression efficiency can be improved.
  • Moreover, since compression encoding is performed for each homogeneous region, a higher compression ratio than luminance-value-based compression can be achieved while preserving sharpness and texture.
  • In the present embodiment, the image conversion method described above is applied to texture mapping in computer graphics.
  • FIG. 13 is a flowchart showing the main flow of the rendering process.
  • The rendering process is the process in computer graphics of converting a three-dimensional model generated in a computer into two-dimensional image data (see, for example, p. 79 of Non-Patent Document 1). As shown in Figure 13, the rendering process consists mainly of setting the viewpoint and light source (S101), coordinate transformation (S102), hidden surface removal (S103), shading and shadowing (S104), texture mapping (S105), and viewport transformation (S106).
  • In step S101, the viewpoint VA and the light source LS are set, which determines the appearance.
  • In step S102, the objects managed in local coordinate systems are assembled into a normalized coordinate system, and in step S103, the hidden surface portions that cannot be seen from the viewpoint are deleted. Then, in step S104, how the light from the light source LS strikes the objects OA and OB is calculated, and shade and shadow are generated.
  • step S 105 texture mapping is performed to generate textures TA and TB for the objects OA and OB.
  • the texture is generally acquired as image data.
  • The texture image TIA is deformed according to the shape of the object OA and composited onto the object OA;
  • likewise, the texture image TIB is deformed to match the shape of the object OB and composited onto the object OB.
  • The image conversion described above is applied in this texture mapping. That is, first, preprocessing for pasting the texture images TIA and TIB onto the objects OA and OB of the 3D CG model is performed. Then, processing proceeds according to the flow of FIG. 1. In step S10, the parameters are acquired for each pixel of the texture images TIA and TIB pasted onto the objects OA and OB, using the optical parameters of the two-dimensional texture images TIA and TIB and the surface normal vectors of the objects OA and OB. The subsequent processing is the same as in the first embodiment. Note that the texture mapping method according to the present embodiment can also be realized by causing a computer to execute a program for realizing the method.
  • a preprocessing unit that performs the above-described preprocessing, a parameter acquisition unit that executes step S10, a homogeneous region specifying unit that executes step S20, a parameter conversion unit that executes step S30, and a step A texture mapping device including a luminance calculation unit that executes S40 may be configured.
  • In step S106, viewport conversion is performed to generate a two-dimensional image with an image size matching the display screen SCN or window WND.
  • The rendering process must be executed every time the viewpoint or the position of the light source changes, and it is repeated frequently in interactive systems such as game devices.
  • In texture mapping, the texture data to be pasted onto the object surface is usually prepared as an image. Therefore, whenever the viewpoint or light source changes, the texture data must be converted by enlargement, reduction, rotation, color change, and so on.
  • FIG. 14 is a diagram showing a first configuration example, in which image conversion according to the present invention is performed using a personal computer.
  • the resolution of the camera 101 is lower than the resolution of the display 102.
  • an enlarged image is created by an image conversion program loaded in the main memory 103.
  • the low resolution image captured by the camera 101 is recorded in the image memory 104.
  • An image feature vector database 502 and an illumination equation parameter database 503 as shown in FIG. 6 are prepared in advance in the external storage device 105 and can be referred to from the image conversion program in the main memory 103.
  • the processing by the image conversion program is the same as in the first embodiment, and a homogeneous region is determined for each illumination equation parameter, and densification is performed in the homogeneous region. That is, a low-resolution image in the image memory 104 is read via the memory bus 106, enlarged in accordance with the resolution of the display 102, and transferred again to the video memory 107 via the memory bus 106. The enlarged image transferred to the video memory 107 is displayed on the display 102.
  • The present invention is not constrained to the configuration of FIG. 14 and can take various other configurations.
  • the illumination equation parameters may be measured directly from the subject by a measuring instrument.
  • the image feature vector database 502 and the illumination equation parameter database 503 of the external storage device 105 are not necessary.
  • The low-resolution image may also be acquired from the network 108. It is also possible to store texture data in the external storage device 105 and execute the texture mapping shown in the third embodiment in the main memory 103.
  • the image conversion program loaded in the main memory 103 may perform image reduction as shown in the first embodiment.
  • Image compression may also be performed according to the second embodiment, in which case the illumination equation parameters are data-compressed and can be transmitted over the network 108 or the like.
  • For the camera 101, any type of imaging device, such as a camera-equipped mobile phone, a digital still camera, or a video movie camera, is applicable. Furthermore, the present invention can also be realized in a playback device that plays back pre-recorded video.
  • FIG. 15 is a diagram showing a second configuration example, which is an example of a configuration for performing image conversion according to the present invention using a server client system.
  • the resolution of the camera 201 is lower than the resolution of the display 202.
  • image enlargement is executed in the server-client system.
  • the server 301 includes an image feature analysis unit 501, an image feature vector database 502, and an illumination equation parameter database 503.
  • The server 301 calculates the illumination equation parameters IINLEP of the input image and outputs them to the parameter operation unit 205. This operation corresponds to step S10 in the flow of FIG. 1.
  • the image feature analysis unit 501, the image feature vector database 502, and the illumination equation parameter database 503 constitute a parameter acquisition unit.
  • an image conversion instruction is passed from the image conversion instruction unit 203 of the client 302 to the parameter operation instruction unit 204 of the server 301 as an image conversion instruction signal ICIS.
  • The parameter operation instruction unit 204 translates the content of the image conversion specified by the image conversion instruction signal ICIS into operations on the illumination equation parameters, and outputs these to the parameter operation unit 205 as the parameter operation instruction signal LEPS.
  • The parameter operation unit 205 operates on the illumination equation parameters IINLEP to perform image enlargement or image compression, and generates new parameter values IOUTLEP. This operation corresponds to steps S20 and S30 in the flow of FIG. 1.
  • The parameter operation unit 205 corresponds to the homogeneous region specifying unit and the parameter conversion unit.
  • In this way, the server 301 can provide the client 302 with the new parameter values IOUTLEP via the network 206, in accordance with the image conversion instruction from the client 302.
  • In the client 302, the image generation unit 207, serving as the luminance calculation unit, generates the enlarged image and supplies it to the display 202. This operation corresponds to step S40 in the flow of FIG. 1.
  • The present invention is not limited to the configuration of FIG. 15; when the resolution of the camera 201 is higher than the resolution of the display 202, the parameter operation unit 205 may perform image reduction as shown in the first embodiment. Further, if the parameter operation unit 205 operates as an encoding device according to the second embodiment and the image generation unit 207 operates as a decoding device, compressed data can be distributed over the network 206.
  • the combination of image devices and the position of each means on the system are arbitrary.
  • the camera 201 any type of imaging device such as a mobile phone with camera, a digital still camera, or a video movie camera can be applied.
  • the present invention can also be realized in a playback apparatus that plays back pre-recorded video.
  • FIG. 16 is a diagram showing a third configuration example, which is an example of a configuration for performing image conversion according to the present invention in photographing with a camera.
  • the camera 401 includes a wide-angle lens 402, and can, for example, capture a wide field of view with an angle of view of 180 degrees at a time.
  • The light source 403 can be photographed by mounting the wide-angle lens 402 facing upward.
  • An xyz three-dimensional coordinate system is defined with the optical axis of the wide-angle lens 402 as the z-axis, the horizontal direction of the wide-angle imaging element 404 inside the camera 401 as the x-axis, and the vertical direction of the wide-angle imaging element 404 as the y-axis,
  • with the focal position of the wide-angle lens 402 as the coordinate origin; the light source vector L is obtained in this coordinate system.
  • FIG. 17A shows the relationship between the position of the light source 403 and the wide-angle image 405 taken by the wide-angle lens 402.
  • The light source 403, moved along the curve LT from position PS1 to position PS5, is recorded from position PX1 to position PX5 on the straight line ST of the wide-angle image 405.
  • A method for obtaining the light source vector L2 will be described, where θ denotes the angle formed by the straight line ST and the x-axis, and φ denotes the angle formed by the straight line ST and the light source vector L2.
  • Figure 17 (b) is a view of the wide-angle image 405 of Figure 17 (a) from the z-axis direction.
  • Let d be the distance between position PX1 and the coordinate origin O, and r the distance between position PX2 and the coordinate origin O.
  • If the pixel positions of position PX1, position PX2, and the coordinate origin O on the wide-angle image are (x1, y1), (x2, y2), and (x0, y0), respectively, the distance d between position PX1 and the coordinate origin O is given by the following.
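  • The formula is an image in the source; with the pixel positions written as above, it is presumably the Euclidean distance

$$d = \sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2},$$

  and likewise $r = \sqrt{(x_2 - x_0)^2 + (y_2 - y_0)^2}$.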
  • Figure 17(c) shows the triangle formed by dropping the intersection line LT from position PX2 in the z-axis direction onto the light source vector L2. If the length of the intersection line LT is z, the following equation is obtained.
  • The subject is photographed by the subject photographing lens 406 and the subject imaging element 407, and the first image output from the subject imaging element 407 is converted into the second image by the image conversion unit 408.
  • The image conversion unit 408 executes, for example, the image enlargement of the flowchart in FIG. 1 or the image compression of FIG. 12.
  • As for the coordinate system used for the image conversion, it is preferable to use the xyz three-dimensional coordinate system of the subject imaging element 407, because the image conversion operates on the output of the subject imaging element 407. Therefore, the light source vector of (Equation 14), expressed in the xyz three-dimensional coordinate system of the wide-angle imaging element 404, is converted into the xyz three-dimensional coordinate system of the subject imaging element 407.
  • The transformation between the coordinate systems can be realized as a transformation of the coordinate axes. Let the vector (x_light, y_light, z_light) be the x-axis of the xyz three-dimensional coordinate system of the wide-angle imaging element 404, expressed in the xyz three-dimensional coordinate system of the subject imaging element 407.
  • If vectors for the y-axis and the z-axis are defined in the same way as for the x-axis, the vectors of the three axes are related through the 3×3 matrix M as follows.
  • Using the matrix M, the light source vector L is converted from the xyz three-dimensional coordinate system of the wide-angle imaging element 404 into the xyz three-dimensional coordinate system of the subject imaging element 407.
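  • A small numeric sketch of this axis change follows, assuming (my reading of the garbled passage above) that the columns of M are the wide-angle camera's axes expressed in the subject camera's frame; the values are illustrative.

```python
import numpy as np

# Columns: the wide-angle imaging element's x, y, z axes expressed in
# the subject imaging element's coordinate system (illustrative values).
M = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],    # wide-angle z maps to subject -y, etc.
              [0.0, 1.0,  0.0]])

L_wide = np.array([0.0, 0.6, 0.8])  # light source vector, wide-angle frame
L_subj = M @ L_wide                 # the same vector in the subject frame
print(L_subj)                       # -> [ 0.  -0.8  0.6]
```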
  • Since the light source is often located above the camera 401, the light source 403 can usually be photographed if, for example, the wide-angle lens 402 with an angle of view of 180 degrees is used. If the light source 403 cannot be captured within the angle of view of the wide-angle lens 402, the orientation of the camera 401 is changed so that the light source 403 falls within the angle of view. Since the change in the orientation of the camera 401 must then be measured, the camera 401 has a built-in three-dimensional attitude sensor 409 (consisting of an acceleration sensor or the like); the three-dimensional motion of the xyz coordinate axes of the wide-angle imaging element 404 is acquired from the three-dimensional attitude sensor 409 and coordinate-transformed in the same way as (Equation 16).
  • The mobile phone 601 includes a far-side camera 602 (a camera that captures the scene in front of the user of the mobile phone 601) and a near-side camera 603 (a camera that captures the user of the mobile phone 601).
  • The far-side camera 602 changes its orientation greatly as the folding display unit 604 is opened. That is, as shown in (a), when the opening angle DAG of the display unit 604 is small, it captures the area above the mobile phone 601.
  • The xyz three-dimensional coordinate system takes, for example, the focal position of the near-side camera 603 as the coordinate origin; using its relationship to the focal position of the far-side camera 602, which is determined by the structure of the mobile phone 601, the captured images can be managed in the same xyz three-dimensional coordinate system. Obviously, the far-side camera 602 can also be used for photographing the light source. In this way, the light source vector among the parameters of the illumination equation shown in FIG. 3 can be calculated.
  • The camera 401 includes a polarizing filter, so the reflected light from the object entering the subject photographing lens 406 can be separated into diffuse and specular reflection components by, for example, the method described with (Equation 4) and Fig. 5. If the diffuse reflection component is used, the surface normal vector N can be calculated by the photometric stereo method described with (Equation 9). As described with (Equation 8), the photometric stereo method requires three or more images with different light source directions; therefore, if the light source 403 is movable, (Equation 8) can be obtained by setting the light source 403 at three or more positions and photographing at each position.
  • The specular reflection component corresponds to ks ρs in (Equation 1). The unknown parameters included in (Equation 2) are the specular reflection component ratio ks, the Fresnel coefficient Fλ, the microfacet distribution m, and the refractive index n.
  • The surface normal vector N can also be measured by using a range finder, in addition to the configuration described above.
  • The present invention can be implemented on a wide variety of widely used video devices, such as personal computers, server-client systems, camera-equipped mobile phones, digital still cameras, video movie cameras, and televisions, and requires no special equipment, operation, or management. Note that the present invention places no constraints on the form of device connection or on the internal configuration of the devices, such as implementation on dedicated hardware or a combination of software and hardware.

Abstract

A plurality of parameters constituting a predetermined illumination equation giving a luminance to each pixel in an image are acquired (S10). For each of the parameters, a homogeneous area formed by pixels having similar values of the parameter is identified (S20). For each parameter, the parameter conversion processing is performed for each of the identified homogeneous areas according to the content of the image conversion (S30). Using each of the parameters after the conversion, the luminance of each pixel in a second image is obtained (S40).

Description

Specification
Image conversion method, device, and program, texture mapping method, device, and program, and server-client system
Technical Field
[0001] The present invention relates to image processing techniques, and more particularly to techniques for realizing image conversion such as enlargement and reduction, image compression, and texture mapping.
Background Art
[0002] With the digitization of image devices and networks, arbitrary image devices can be connected, and the degree of freedom of image exchange is increasing. An environment has been established in which users can freely handle images without being restricted by differences between systems. For example, users can output images taken with a digital still camera to a printer, publish them on a network, or view them on a home television.
[0003] Conversely, the system side needs to support various image formats, and image format conversion naturally needs to be sophisticated. For example, image size conversion occurs frequently, requiring up-converters (devices that increase the number of pixels and lines) and down-converters (devices that reduce the number of pixels and lines). For example, when printing on A4 paper (297 mm x 210 mm) at a resolution of 600 dpi, a document of 7128 pixels x 5040 lines is required, but the resolution of many digital still cameras falls below this, so an up-converter is needed. On the other hand, since the final output form of an image published on a network is not determined, it must be converted to the appropriate image size each time an output device is determined. As for home televisions, digital terrestrial services have started, conventional standard television images and HD (High Definition) television images coexist, and image size conversion is used frequently.
[0004] As image sizes diversify, the importance of scalability in image compression increases. Scalability here refers to the ability to extract, from a single bit stream, standard television image data in some cases and HDTV image data in others, that is, the degree of freedom to extract various image sizes. When scalability is ensured, there is no need to prepare a transmission path for each image format, and a smaller transmission capacity suffices.
[0005] Image conversion such as enlargement and reduction is also frequently used in texture mapping in computer graphics (the patterns appearing on an object surface are collectively called "texture"). Texture mapping is a technique for expressing the pattern and texture of an object surface by pasting a two-dimensional image onto the surface of a three-dimensional object formed in a computer. Since the two-dimensional image is pasted in accordance with the orientation of the surface of the three-dimensional object, processing such as enlargement, reduction, deformation, and rotation must be applied to the two-dimensional image (see Non-Patent Document 1).
[0006] Conventionally, processes such as image enlargement, image reduction, and image compression exploit differences in luminance values among a plurality of pixels.
[0007] That is, in image enlargement, luminance values are interpolated by the bilinear method, the bicubic method, or the like in order to generate image data that did not exist at the time of sampling (see Non-Patent Document 1). Since interpolation can generate only values intermediate between the sampled data, the sharpness of edges and the like tends to deteriorate. Therefore, techniques have been disclosed in which the interpolated image is used as an initial enlarged image, and edge portions are then extracted and only the edges are emphasized (see Non-Patent Documents 2 and 3). In particular, Non-Patent Document 3 introduces a multiresolution representation and the Lipschitz index so that edge enhancement is performed selectively according to the sharpness of the edges.
[0008] In image reduction, some of the pixels are deleted. However, if pixels that were located apart from each other before the reduction become adjacent, continuity is disturbed and moire occurs. Therefore, the general method is to apply a low-pass filter to smooth the luminance change before deleting some of the pixels, and then delete some of the pixels.
[0009] Furthermore, image compression exploits the high correlation of luminance values between adjacent pixels. To express this correlation, the spatial frequency components are decomposed into orthogonal components. The discrete cosine transform is generally used for the orthogonal transformation; since the high correlation of luminance values between adjacent pixels concentrates the energy in the low-frequency terms, the high-frequency terms are deleted to compress the image information (see Non-Patent Document 4).
Patent Document 1: Japanese Laid-Open Patent Publication No. 2005-149390
Non-Patent Document 1: Shinya Araya, "Meikai 3-jigen Computer Graphics (Clear 3D Computer Graphics)", Kyoritsu Shuppan, pp. 144-145, September 25, 2003
Non-Patent Document 2: H. Greenspan, C. H. Anderson, "Image enhancement by non-linear extrapolation in frequency space", SPIE Vol. 2182, Image and Video Processing II, 1994
Non-Patent Document 3: Makoto Nakashizuka et al., "Image resolution enhancement on multiscale luminance gradient planes", IEICE Transactions D-II, Vol. J81-D-II, No. 10, pp. 2249-2258, October 1998
Non-Patent Document 4: Multimedia Communication Study Group (ed.), "Point-Illustrated Broadband + Mobile Standard MPEG Textbook", ASCII, pp. 25-29, February 11, 2003
Non-Patent Document 5: Image Processing Handbook Editorial Committee (ed.), "Image Processing Handbook", Shokodo, p. 393, June 1987
Non-Patent Document 6: Shinji Umeyama, "Separation of diffuse and specular reflection components from object appearance, using multiple observations through a polarization filter and probabilistic independence", Meeting on Image Recognition and Understanding (MIRU) 2002, pp. I-469 to I-476, 2002
Disclosure of the Invention
Problems to Be Solved by the Invention
[0010] However, the conventional techniques have had the following problems.
[0011] As described above, when image conversion such as enlargement, reduction, or compression is performed using luminance differences between pixels, separating edge components from noise is difficult, so there is a high possibility that the image quality deteriorates through the conversion.
[0012] That is, in image enlargement, since the edge portions of the initial enlarged image blurred by interpolation are emphasized, noise is emphasized together with the edges, which may degrade the image. In image compression, noise lowers the correlation between adjacent pixels and reduces the compression efficiency. Furthermore, edge enhancement of the interpolated image in enlargement and smoothing in reduction are empirical methods without explicit countermeasures against noise, so the image quality after conversion cannot be guaranteed.
[0013] In view of the above problems, an object of the present invention is to make image conversion, image compression, and texture mapping less susceptible to noise than before, and to make the image quality more stable.
Means for Solving the Problems
[0014] The present invention provides, as an image conversion method: acquiring, for each pixel of a first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance; identifying, for each parameter, a homogeneous region formed by pixels having similar values of that parameter; performing, for each parameter, conversion processing of that parameter for each identified homogeneous region according to the content of the image conversion; and obtaining the luminance of each pixel of a second image using the parameters after the conversion processing.
[0015] According to this invention, a plurality of parameters constituting the illumination equation that gives luminance are acquired for the first image to be converted. The parameters here are, for example, the optical characteristics of the subject, the environmental conditions, and the surface normal of the subject. A homogeneous region is then identified for each parameter, and the conversion processing of that parameter is performed for each identified homogeneous region according to the content of the image conversion. The luminance of each pixel of the second image after the conversion is obtained using the converted parameters.
[0016] That is, the luminance is decomposed into illumination equation parameters, and the image conversion is performed using the inter-pixel correlation of each parameter. The illumination equation parameters, such as the surface normal and the optical characteristics, are highly independent of one another. Therefore, when processing is performed per parameter, the peculiarity of noise is easier to capture than when processing is performed on the luminance, which is given as an integrated value of the parameters. Moreover, the optical characteristics can be decomposed into diffuse and specular reflection components, which are highly independent factors, so the peculiarity of noise can be made to stand out. Since a homogeneous region is identified based on the similarity of the illumination equation parameters, which are physical characteristics of the subject, it is, so to speak, defined with physical support. And since the image conversion is executed for each parameter and for each homogeneous region, edge portions are preserved as boundary conditions between homogeneous regions. Therefore, image conversion with stable image quality can be realized while preserving the sharpness of edges and the sense of texture. Moreover, unlike the prior art, there is no need to detect edges directly, so the problem of noise contamination does not arise.
[0017] In the image conversion method of the present invention, when image enlargement is performed, processing that densifies each parameter may be performed as the conversion processing of that parameter. As described above, in the present invention the homogeneous regions are, so to speak, defined with physical support. Therefore, compared with the conventional empirical technique of edge-enhancing an interpolated initial enlarged image, the present invention, which densifies the parameters for each homogeneous region, is objective and can further stabilize the image quality of the enlarged image.
[0018] In the image conversion method of the present invention, when image reduction is performed, processing that reduces the density of each parameter may be performed as the conversion processing of that parameter. As in the case of enlargement, compared with the conventional empirical method using a low-pass filter, the present invention, which reduces the parameter density for each homogeneous region, is objective and can further stabilize the image quality of the reduced image. A sketch of such per-region densification is given below.
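The specification gives no pseudocode for the densification step; the following minimal sketch, assuming a per-pixel parameter map and an integer region-label map, interpolates a parameter only from source pixels of the same homogeneous region, so region boundaries stay sharp edges. The function name is illustrative:

    import numpy as np

    def densify_parameter(param, labels, factor):
        """Bilinear upsampling confined to homogeneous regions: each output
        sample is interpolated only from source pixels carrying the same
        region label, so region boundaries stay sharp edges."""
        h, w = param.shape
        out = np.empty((h * factor, w * factor))
        for i in range(h * factor):
            for j in range(w * factor):
                y, x = i / factor, j / factor
                y0, x0 = min(int(y), h - 1), min(int(x), w - 1)
                lbl = labels[y0, x0]              # region of this sample
                num = den = 0.0
                for yy in {y0, min(y0 + 1, h - 1)}:
                    for xx in {x0, min(x0 + 1, w - 1)}:
                        if labels[yy, xx] != lbl:
                            continue              # never mix regions
                        wgt = (1 - abs(y - yy)) * (1 - abs(x - xx))
                        num += max(wgt, 0.0) * param[yy, xx]
                        den += max(wgt, 0.0)
                out[i, j] = num / den
        return out

    kd = np.array([[0.2, 0.2, 0.8], [0.2, 0.2, 0.8]])   # edge between regions
    lbl = np.array([[0, 0, 1], [0, 0, 1]])
    print(densify_parameter(kd, lbl, 2))                # edge stays a step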
[0019] The present invention also provides, as an image compression method: acquiring, for each pixel of an image, a plurality of parameters constituting a predetermined illumination equation that gives luminance; identifying, for each parameter, a homogeneous region formed by pixels having similar values of that parameter; and performing, for each parameter, compression coding of that parameter for each identified homogeneous region.
[0020] According to this invention, a plurality of parameters constituting the illumination equation that gives luminance are acquired for the image to be compressed. A homogeneous region is identified for each parameter, and the parameter is compression-coded for each identified homogeneous region. Within a homogeneous region, the correlation of the illumination equation parameter between neighboring pixels is high, so the compression efficiency can be improved over image compression based on luminance values. Edge portions are preserved as boundary conditions between homogeneous regions. Therefore, image compression with high compression efficiency can be realized while preserving the sharpness of edges and the sense of texture.
[0021] The present invention also provides, as a texture mapping method: performing preprocessing of pasting a texture image onto an object of a three-dimensional CG model; acquiring, for each pixel of the texture image pasted on the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance; identifying, for each parameter, a homogeneous region formed by pixels having similar values of that parameter; performing, for each parameter, conversion processing of that parameter for each identified homogeneous region according to the content of a predetermined image conversion; and obtaining the luminance of each pixel of the image of the object using the parameters after the conversion processing.
[0022] According to this invention, as with the image conversion method described above, texture mapping with stable image quality can be realized while preserving the sharpness of edges and the sense of texture.
Effects of the Invention
[0023] According to the present invention, the conversion processing is performed for each homogeneous region of each illumination equation parameter constituting the luminance value, so image conversion and texture mapping with stable image quality can be realized while preserving the sharpness of edges and the sense of texture. Image compression with high compression efficiency can likewise be realized while preserving the sharpness of edges and the sense of texture.
Brief Description of the Drawings
[0024]
FIG. 1 is a flowchart showing an image conversion method according to the first embodiment of the present invention.
FIG. 2 is a schematic diagram showing the relationship between luminance and illumination equation parameters.
FIG. 3 is a conceptual diagram showing the geometric conditions assumed by the illumination equation.
FIG. 4 is a diagram for explaining an example of a method of measuring the surface normal vector.
FIG. 5 is a diagram for explaining an example of a technique for separating diffuse reflection and specular reflection.
FIG. 6 is a diagram for explaining a method of acquiring illumination equation parameters by referring to learning data.
FIG. 7 is a diagram showing the patterns used for homogeneous region determination.
FIG. 8 is a diagram showing an example of a method of scanning unit regions.
FIG. 9 is a diagram showing an example of noise removal.
FIG. 10 is a diagram showing processing for densifying parameters for image enlargement.
FIG. 11 is a diagram showing processing for reducing the density of parameters for image reduction.
FIG. 12 is a conceptual diagram showing parameter conversion processing for image compression in the second embodiment of the present invention.
FIG. 13 is a diagram for explaining the third embodiment of the present invention, showing the flow of rendering processing.
FIG. 14 is a diagram showing a first configuration example for implementing the present invention, using a personal computer.
FIG. 15 is a diagram showing a second configuration example for implementing the present invention, using a server-client system.
FIG. 16 is a diagram showing a third configuration example for implementing the present invention, in which the image conversion according to the present invention is performed in photographing with a camera.
FIG. 17 is a diagram showing the relationship between the position of the light source and the image captured with a wide-angle lens.
FIG. 18 is a diagram showing a configuration using a folding mobile phone in the third configuration example for implementing the present invention.
Explanation of Reference Numerals
[0025]
S10: first step
S20: second step
S30: third step
S40: fourth step
AA1-AA3, AB1-AB3, AC1-AC3, AD1-AD3, AE1, AE2, AF1, AF2, AG1, AG2: homogeneous regions
TIA, TIB: texture images
OA, OB: objects
205: parameter operation unit
207: image generation unit
301: server
302: client
501: image feature analysis unit
502: image feature vector database
503: illumination equation parameter database
Best Mode for Carrying Out the Invention
[0026] A first aspect of the present invention provides a method of performing predetermined image conversion on a first image to generate a second image, comprising: a first step of acquiring, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of identifying, for each of the parameters, a homogeneous region formed by pixels having similar values of that parameter; a third step of performing, for each of the parameters, conversion processing of that parameter for each homogeneous region identified in the second step, according to the content of the predetermined image conversion; and a fourth step of obtaining the luminance of each pixel of the second image using the parameters after the conversion processing in the third step.
[0027] A second aspect of the present invention provides the image conversion method of the first aspect, wherein the predetermined image conversion is image enlargement, and the conversion processing in the third step is processing that densifies the parameter.
[0028] A third aspect of the present invention provides the image conversion method of the first aspect, wherein the predetermined image conversion is image reduction, and the conversion processing in the third step is processing that reduces the density of the parameter.
[0029] A fourth aspect of the present invention provides the image conversion method of the first aspect, wherein the acquisition of the plurality of parameters in the first step is performed by measurement from the subject or by estimation from the first image.
[0030] A fifth aspect of the present invention provides the image conversion method of the first aspect, wherein, in the second step, the degree of similarity is evaluated using the variance of the values of the parameter over a plurality of pixels.
[0031] A sixth aspect of the present invention provides the image conversion method of the first aspect, wherein the second step includes processing for removing noise within the identified homogeneous region.
[0032] A seventh aspect of the present invention provides, as a texture mapping method: a preprocessing step of pasting a texture image onto an object of a three-dimensional CG model; a first step of acquiring, for each pixel of the texture image pasted on the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of identifying, for each of the parameters, a homogeneous region formed by pixels having similar values of that parameter; a third step of performing, for each of the parameters, conversion processing of that parameter for each homogeneous region identified in the second step, according to the content of a predetermined image conversion; and a fourth step of obtaining the luminance of each pixel of the image of the object using the parameters after the conversion processing in the third step.
[0033] An eighth aspect of the present invention provides a device that performs predetermined image conversion on a first image to generate a second image, comprising: a parameter acquisition unit that acquires, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a homogeneous region identification unit that identifies, for each of the parameters, a homogeneous region formed by pixels having similar values of that parameter; a parameter conversion unit that performs, for each of the parameters, conversion processing of that parameter for each homogeneous region identified by the homogeneous region identification unit, according to the content of the predetermined image conversion; and a luminance calculation unit that obtains the luminance of each pixel of the second image using the parameters after the conversion processing by the parameter conversion unit.
[0034] A ninth aspect of the present invention provides, as a texture mapping device: a preprocessing unit that pastes a texture image onto an object of a three-dimensional CG model; a parameter acquisition unit that acquires, for each pixel of the texture image pasted on the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a homogeneous region identification unit that identifies, for each of the parameters, a homogeneous region formed by pixels having similar values of that parameter; a parameter conversion unit that performs, for each of the parameters, conversion processing of that parameter for each homogeneous region identified by the homogeneous region identification unit, according to the content of a predetermined image conversion; and a luminance calculation unit that obtains the luminance of each pixel of the image of the object using the parameters after the conversion processing by the parameter conversion unit.
[0035] A tenth aspect of the present invention provides, as a server-client system that performs image conversion: a server having the parameter acquisition unit, the homogeneous region identification unit, and the parameter conversion unit of the eighth aspect; and a client having the luminance calculation unit of the eighth aspect, wherein the client instructs the server on the content of the image conversion.
[0036] An eleventh aspect of the present invention provides a program that causes a computer to execute a method of performing predetermined image conversion on a first image to generate a second image, the method comprising: a first step of acquiring, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of identifying, for each of the parameters, a homogeneous region formed by pixels having similar values of that parameter; a third step of performing, for each of the parameters, conversion processing of that parameter for each homogeneous region identified in the second step, according to the content of the predetermined image conversion; and a fourth step of obtaining the luminance of each pixel of the second image using the parameters after the conversion processing in the third step.
[0037] A twelfth aspect of the present invention provides a texture mapping program that causes a computer to execute: a preprocessing step of pasting a texture image onto an object of a three-dimensional CG model; a first step of acquiring, for each pixel of the texture image pasted on the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance; a second step of identifying, for each of the parameters, a homogeneous region formed by pixels having similar values of that parameter; a third step of performing, for each of the parameters, conversion processing of that parameter for each homogeneous region identified in the second step, according to the content of a predetermined image conversion; and a fourth step of obtaining the luminance of each pixel of the image of the object using the parameters after the conversion processing in the third step.
[0038] Hereinafter, embodiments of the present invention will be described with reference to the drawings.
[0039] (First Embodiment)
FIG. 1 is a flowchart showing an image conversion method according to the first embodiment of the present invention. The image conversion method according to this embodiment can be realized by causing a computer to execute a program for implementing the method.
[0040] In this embodiment, the equations shown in (Equation 1) and (Equation 2), for example, are used as the illumination equation that gives luminance, and a homogeneous region is identified for each of the plural parameters constituting these equations. Then, for each homogeneous region, the conversion processing of the parameter is performed, realizing the predetermined image conversion.
[数 1] (Equation 1)
I_v = \rho_a I_a + I_i (N \cdot L)\, d\omega \,( k_d \rho_d + k_s \rho_s )

[数 2] (Equation 2)
\rho_s = \frac{F_\lambda}{\pi} \cdot \frac{\exp(-\tan^2\beta / m^2)}{4 m^2 \cos^4\beta}
F_\lambda = \frac{1}{2} \cdot \frac{(g-c)^2}{(g+c)^2} \left( 1 + \frac{[c(g+c)-1]^2}{[c(g-c)+1]^2} \right), \qquad g^2 = n^2 + c^2 - 1, \qquad c = (L \cdot H)
[0041] Here, Ia is the luminance of the ambient light, ρa is the reflectance for the ambient light, Ii is the luminance of the illumination, the vector N is the surface normal vector, the vector L is the light source vector representing the light source direction, dω is the solid angle of the light source, ρd is the bidirectional reflectance of the diffuse reflection component, ρs is the bidirectional reflectance of the specular reflection component, Fλ is the Fresnel coefficient, m is the microfacet distribution, and n is the refractive index. Further, kd is the diffuse reflection component ratio and ks is the specular reflection component ratio, with kd + ks = 1. The vector H is the half vector located midway between the light source vector L and the viewpoint vector V, and β is the angle formed by the surface normal vector N with the half vector, which can be calculated from the light source vector L, the surface normal vector N, and the viewpoint vector V.
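A minimal sketch that evaluates (Equation 1) for one pixel from these parameters, using (Equation 2) as reconstructed above; all numeric values are illustrative, not from the specification:

    import numpy as np

    def fresnel(c, n):
        """Unpolarized Fresnel coefficient from c = (L.H) and refractive index n."""
        g = np.sqrt(n**2 + c**2 - 1)
        return 0.5 * ((g - c)**2 / (g + c)**2) * \
               (1 + (c * (g + c) - 1)**2 / (c * (g - c) + 1)**2)

    def rho_s(N, L, V, m, n):
        """Specular bidirectional reflectance per (Equation 2)."""
        H = (L + V) / np.linalg.norm(L + V)        # half vector
        beta = np.arccos(np.clip(N @ H, -1, 1))    # angle between N and H
        D = np.exp(-(np.tan(beta) / m)**2) / (4 * m**2 * np.cos(beta)**4)
        return fresnel(L @ H, n) / np.pi * D

    def luminance(Ia, rho_a, Ii, N, L, V, d_omega, kd, rho_d, m, n):
        """Viewpoint luminance Iv per (Equation 1), with ks = 1 - kd."""
        ks = 1.0 - kd
        return rho_a * Ia + Ii * (N @ L) * d_omega * \
               (kd * rho_d + ks * rho_s(N, L, V, m, n))

    N = np.array([0.0, 0.0, 1.0])
    L = np.array([0.0, 0.6, 0.8]); V = np.array([0.0, -0.6, 0.8])
    print(luminance(Ia=0.1, rho_a=0.3, Ii=1.0, N=N, L=L, V=V,
                    d_omega=0.05, kd=0.8, rho_d=0.6, m=0.3, n=1.5))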
[0042] FIG. 2 is a schematic diagram showing the relationship between luminance and illumination equation parameters. In the figure, (a) is a graph showing the luminance distribution of the image shown in (b), and (c) to (f) are graphs showing, among the illumination equation parameters, the distributions of the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, the diffuse reflection component ratio kd, and the surface normal vector N, respectively. In the graphs of FIG. 2(a) and (c) to (f), the horizontal axis is the spatial position and the vertical axis is the luminance or the value of each parameter.
[0043] In the image of FIG. 2(b), there are four types of objects X1 to X4. Object X1 has a luminance distribution that brightens from left to right, object X2 has a random luminance distribution with no regularity, object X3 has a luminance distribution with a highlight in the center, and object X4 has a distribution of equal luminance at all spatial positions.
[0044] In the range of object X1, the bidirectional reflectance ρd of the diffuse reflection component, the diffuse reflection component ratio kd, and the surface normal vector N each have homogeneous regions (AA1, AC1, AD1), and only the bidirectional reflectance ρs of the specular reflection component changes. This change in ρs produces the change in luminance. In the range of object X2, the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, and the surface normal vector N each have homogeneous regions (AA2, AB1, AD2), and only the diffuse reflection component ratio kd changes. The diffuse reflection component ratio kd varies randomly with no regularity, so the luminance also changes randomly, forming a fine texture.
[0045] In the range of object X3, the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, and the diffuse reflection component ratio kd have homogeneous regions (AA2, AB2, AC2), and only the surface normal vector N changes. This change in N produces the change in luminance. In the range of object X4, the parameters ρd, ρs, kd, and N all have homogeneous regions (AA3, AB3, AC3, AD3), so the luminance value is constant. Since the diffuse reflection component ratio kd is high and the diffuse reflection component is dominant (FIG. 2(e)) while the bidirectional reflectance ρd of the diffuse reflection component is low (FIG. 2(c)), the luminance value in the range of object X4 is low.
[0046] In the conventional image enlargement processing shown in Non-Patent Documents 2 and 3, edges are detected from a luminance change like that of FIG. 2(a) and are emphasized. In this case there is the problem that edge extraction from luminance changes is difficult to separate from noise, and the noise is also emphasized by the edge enhancement.
[0047] As is clear from (Equation 1), the luminance changes if even one of the parameters constituting the illumination equation changes. It can therefore be understood that edge detection is more stable when performed per parameter than when performed from luminance changes. In this embodiment, an edge arises where different homogeneous regions adjoin, so the more stably the homogeneous regions of a parameter are obtained, the more stably the edges can be obtained. Therefore, by converting each parameter for each homogeneous region, the image conversion can be executed while preserving the sharpness of edges and the sense of texture.
[0048] Returning to the flow of FIG. 1, in step S00, initial settings are made. Here, the first image to be converted is acquired, and the threshold THEPR for homogeneous region determination, the threshold THMEPR for homogeneous region merge determination, and the threshold THN for noise determination are set. How these thresholds are used will be described later.
[0049] In step S10 as the first step, a plurality of parameters constituting the predetermined illumination equation are acquired for each pixel of the first image. Here, the illumination equations of (Equation 1) and (Equation 2) above are used. The ambient light luminance Ia, the ambient light reflectance ρa, the illumination luminance Ii, the light source vector L, and the solid angle dω of the light source are called the environmental conditions, while the bidirectional reflectance ρd of the diffuse reflection component, the bidirectional reflectance ρs of the specular reflection component, the diffuse reflection component ratio kd, and the specular reflection component ratio ks are called the optical characteristics. These give the luminance value Iv of the reflected light in the viewpoint direction according to the illumination equation shown in (Equation 1).
[0050] FIG. 3 is a conceptual diagram showing the geometric conditions assumed by (Equation 1). As shown in FIG. 3, light from the light source is incident on the current point of interest P on the object surface SF with irradiance Ii(N·L)dω, and a diffuse reflection component kd ρd and a specular reflection component ks ρs are reflected. The ambient light is light that reaches the current point of interest P on the object surface SF from the surroundings by multiple reflections and the like, and corresponds to the bias component of the luminance Iv in the viewpoint direction (vector V).
[0051] The illumination equation and the types of parameters shown here are merely examples; the present invention places no restriction on the structure of the illumination equation or on the types of parameters, which are arbitrary.
[0052] Each parameter of (Equation 1) can be obtained by measurement from the subject or by estimation from a given captured image. For example, the surface normal vector N can be measured with a range finder or the like using the principle of triangulation (see, for example, Non-Patent Document 5). The principle of triangulation exploits the fact that a triangle is uniquely determined once one side and the angles at its two ends are determined; as shown in FIG. 4, when the angles at which a point P is viewed from two points A and B separated by a known distance l are α and β respectively, the coordinate values (x, y) of the point P are given by
[数 3] (Equation 3)
x = \frac{l \tan\beta}{\tan\alpha + \tan\beta}, \qquad y = \frac{l \tan\alpha \tan\beta}{\tan\alpha + \tan\beta}
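A small numeric check of (Equation 3) as reconstructed above; the baseline length and angles are illustrative:

    import numpy as np

    def triangulate(l, alpha, beta):
        """Coordinates of P from the baseline length l and the viewing
        angles alpha (at A) and beta (at B), per (Equation 3)."""
        ta, tb = np.tan(alpha), np.tan(beta)
        return l * tb / (ta + tb), l * ta * tb / (ta + tb)

    # P seen at 45 degrees from both ends of a 2 m baseline lies at (1, 1).
    print(triangulate(l=2.0, alpha=np.radians(45), beta=np.radians(45)))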
[0053] As a technique for separating diffuse reflection and specular reflection, Non-Patent Document 6, for example, discloses a technique that exploits the property that the specular reflection component is polarized. When light is reflected at an object surface, the Fresnel coefficient usually differs between the electric field component parallel to the plane of incidence/reflection and the component perpendicular to it, so the reflected light is polarized. For this reason the specular reflection component is generally polarized, whereas diffuse reflection, being scattered reflection, has no polarization. When the reflected light RR is observed through a polarizing filter PF as shown in FIG. 5, the intensity of the transmitted light RRP is the intensity of the component of the reflected light RR that is parallel to the polarization axis PFA of the polarizing filter PF. Therefore, when the specular reflection component from the object surface SF is observed while rotating the polarizing filter PF, the intensity of the transmitted light RRP varies with the angle ψ between the polarization axis PFA of the polarizing filter PF and the polarization plane SPP of the specular reflection, and is given by the following equation.

[数 4] (Equation 4)
L(\psi) = L_d + \frac{L_s}{4} \left\{ F_V(\theta'_i) + F_P(\theta'_i) - \left( F_V(\theta'_i) - F_P(\theta'_i) \right) \cos 2\psi \right\}

[0054] Here, Ld is the luminance of the diffuse reflection component, Ls is the luminance of the specular reflection component, θ'i is the incidence angle of the light at the micro reflection surface, FP is the Fresnel coefficient of the parallel electric field component for a dielectric, and FV is the Fresnel coefficient of the perpendicular electric field component for a dielectric.
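Since (Equation 4) is sinusoidal in 2ψ, Ld and Ls can be recovered from observations at a few filter rotations by a least-squares fit. A minimal sketch under that reading, assuming FV and FP are known (e.g., from the refractive index); the function name is illustrative:

    import numpy as np

    def separate_reflection(psis, intensities, F_V, F_P):
        """Fit L(psi) = a + b*cos(2*psi) and invert (Equation 4):
        a = Ld + Ls*(F_V + F_P)/4 and b = -Ls*(F_V - F_P)/4."""
        A = np.column_stack([np.ones_like(psis), np.cos(2 * psis)])
        (a, b), *_ = np.linalg.lstsq(A, intensities, rcond=None)
        Ls = -4.0 * b / (F_V - F_P)          # requires F_V != F_P
        Ld = a - Ls * (F_V + F_P) / 4.0
        return Ld, Ls

    # Example: synthetic observations at several polarizer angles.
    F_V, F_P = 0.08, 0.02                    # assumed Fresnel coefficients
    psis = np.radians([0, 30, 60, 90, 120, 150])
    obs = 0.5 + 1.2 / 4 * (F_V + F_P - (F_V - F_P) * np.cos(2 * psis))
    print(separate_reflection(psis, obs, F_V, F_P))   # ~ (0.5, 1.2)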
[0055] On the other hand, as a method of estimating each parameter from a captured image, it is effective, for example, to learn in advance the correspondence between spatial response characteristics and illumination equation parameters, and to refer to the learning data when acquiring the parameters. For example, as shown in FIG. 6, the relationship between image feature vectors and illumination equation parameters is learned in advance, and an image feature vector database 502 and an illumination equation parameter database 503 are prepared. The input image IIN as the first image is converted into an input image feature vector IINFV by image feature analysis processing 501. In the image feature analysis processing 501, the spatial response characteristics are obtained by, for example, a wavelet transform. The image feature vector database 502 selects the image feature vector closest to the input image feature vector IINFV and outputs an input image feature vector number IINFVN. The illumination equation parameter database 503 receives the input image feature vector number IINFVN and outputs the corresponding illumination equation parameters as the input image illumination equation parameters IINLEP. With this method, all parameters of the predetermined illumination equation can be acquired.
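At their core, the two databases implement a nearest-neighbor lookup from a feature vector to a stored parameter set. A minimal sketch of that lookup (the feature extraction, a wavelet transform in the specification, is reduced to precomputed toy vectors; the class name is illustrative):

    import numpy as np

    class ParameterDatabase:
        """Paired stores: feature vectors (502) and illumination equation
        parameters (503), indexed by the same vector number."""
        def __init__(self, feature_vectors, parameter_sets):
            self.features = np.asarray(feature_vectors)    # shape (K, D)
            self.parameters = parameter_sets               # list of K dicts

        def lookup(self, input_feature):
            # nearest stored feature vector -> its vector number (IINFVN)
            dists = np.linalg.norm(self.features - input_feature, axis=1)
            return self.parameters[int(np.argmin(dists))]  # IINLEP

    # Example with a toy 3-entry database.
    db = ParameterDatabase(
        feature_vectors=[[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]],
        parameter_sets=[{"kd": 0.9, "m": 0.1}, {"kd": 0.3, "m": 0.4},
                        {"kd": 0.6, "m": 0.2}])
    print(db.lookup(np.array([0.75, 0.25])))   # -> {'kd': 0.3, 'm': 0.4}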
[0056] Note that the present invention does not restrict the method of measuring or estimating the parameters of the illumination equation, and any method can be applied. For example, the surface normal vector N can be estimated by the photometric stereo method, by converting (Equation 8), obtained from three or more images with different light source directions, into (Equation 9) using a generalized inverse matrix (R. J. Woodham, "Photometric method for determining surface orientation from multiple images", Optical Engineering 19, pp. 139-144, 1980).
[数 8] (Equation 8)
v = L x

[数 9] (Equation 9)
x = (L^T L)^{-1} L^T v
[0057] Here, the vector x is the surface normal vector scaled by the reflectance, ρd N, whose length is the reflectance ρd; the matrix L is the light source matrix stacking the light source vectors L for the number of shots; and the vector v is the vector collecting the luminance values Iv of the reflected light in the viewpoint direction for the number of shots. The object surface is assumed to be a uniformly diffusing (Lambertian) surface, and the light source is assumed to be a point source at infinity. Besides the method shown in FIG. 5, techniques for separating diffuse and specular reflection include, for example, a method that uses the difference in the distribution shapes of the diffuse and specular reflection components in a three-dimensional color space formed from RGB signals (S. Tominaga, N. Tanaka, "Estimating reflection parameters from a single color image", IEEE Computer Graphics and Applications, vol. 20, Issue 5, pp. 58-66, 2000).
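A minimal sketch of (Equation 9) for a single pixel, solving the least-squares normal from three or more light directions; the data are synthetic and the Lambertian assumption above is taken as given:

    import numpy as np

    def photometric_stereo(L, v):
        """Solve v = L x for x = rho_d * N per (Equations 8 and 9).
        L: (n, 3) stacked light source vectors; v: (n,) luminances; n >= 3."""
        x, *_ = np.linalg.lstsq(L, v, rcond=None)   # (L^T L)^-1 L^T v
        rho_d = np.linalg.norm(x)                   # reflectance = |x|
        return rho_d, x / rho_d                     # unit surface normal

    # Example: a surface tilted toward +x, observed under three lights.
    N_true = np.array([0.6, 0.0, 0.8]); rho_true = 0.7
    L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
    L /= np.linalg.norm(L, axis=1, keepdims=True)
    v = L @ (rho_true * N_true)                     # ideal Lambertian shading
    print(photometric_stereo(L, v))                 # ~ (0.7, [0.6, 0, 0.8])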
[0058] Next, in step S20 as the second step, for each parameter, a homogeneous region formed by pixels having similar values of that parameter is identified. Here, the similarity of a parameter is evaluated by the variance of that parameter over a region of plural pixels. When the variance is smaller than the homogeneous region determination threshold THEPR set in step S00, the region of plural pixels is determined to be a homogeneous region; when it is greater than or equal to THEPR, it is determined not to be a homogeneous region. In the latter case, it is presumed either that the pixels of the region are all mutually heterogeneous or that different homogeneous regions are contained in it. In either case an edge is considered to be included, so, in order to preserve the sharpness of edges and the sense of texture, no processing is applied to pixels that are not included in any homogeneous region.

[0059] For example, in the case of the surface normal vector, the angular differences between the vectors are small when they are homogeneous. Therefore, the homogeneous region determination threshold THEPR is set to, say, 0.5 degrees; when the variance is smaller than 0.5 degrees the region is determined to be homogeneous, and when it is greater than or equal to 0.5 degrees it is determined to be heterogeneous. The diffuse reflection component ratio kd is a ratio taking values from 0 to 1, so THEPR is set to, say, 0.01. When the variance is smaller than 0.01 the region is determined to be homogeneous, and when it is greater than or equal to 0.01 it is determined to be heterogeneous.
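A minimal sketch of this variance test for a scalar parameter such as kd, with the threshold from the example above; the function name is illustrative:

    import numpy as np

    def is_homogeneous(param_block, TH_EPR):
        """Variance-based homogeneity test of step S20 for one block of
        parameter values (e.g., kd over a group of pixels)."""
        return np.var(param_block) < TH_EPR

    kd_flat  = np.array([0.52, 0.53, 0.52, 0.51])   # nearly constant kd
    kd_mixed = np.array([0.10, 0.90, 0.15, 0.85])   # two materials mixed
    print(is_homogeneous(kd_flat,  0.01))   # True  -> homogeneous region
    print(is_homogeneous(kd_mixed, 0.01))   # False -> contains an edge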
[0060] The region of plural pixels over which the similarity of a parameter is determined can be set arbitrarily; here, a unit region of 5 x 5 pixels is used (S21). In this case, if the 28 kinds of determination P01 to P28 shown in FIG. 7 are performed, homogeneous regions can be extracted for all pattern shapes within the unit region UA. The conditions are (1) that all pixels included in a homogeneous region are adjacent to one another, and (2) that the center pixel of the unit region UA is always included. The determination is performed in two stages for all 28 patterns. First, in the 3 x 3 center area CA, it is determined whether the three gray pixels among the nine pixels are homogeneous. Next, for the patterns determined to be homogeneous, it is determined whether they remain homogeneous when the hatched pixels outside the center area CA are included. When plural patterns are determined to be homogeneous regions, their union is taken as the homogeneous region.
[0061] By such processing, a homogeneous region can be recognized in each unit region (S22). When a homogeneous region is newly recognized (Yes in S22), the homogeneous region data is updated to add this new homogeneous region (S23). Steps S21 to S23 are repeated until the determination has been completed for all unit regions (S24). As shown in FIG. 8, if the 5 x 5 pixel unit region UA is scanned so as to overlap by one line horizontally and vertically, the homogeneous regions generated in the unit regions UA are joined to one another and extended over the entire image.
[0062] Next, in step S25, the similarity of the homogeneous regions recognized in adjacent unit areas is evaluated, and similar homogeneous regions are merged. The method of evaluating the similarity of homogeneous regions is arbitrary; for example, the average parameter value may be computed for each unit area and the judgment made using the difference between these averages. That is, when the difference is smaller than the homogeneous-region merge determination threshold THMEPR set in step S00, the homogeneous regions are merged.
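The merge test of step S25 can be sketched as follows, assuming each homogeneous region is carried as the array of its member pixels' parameter values; the function name and the region representation are illustrative.

    # Merging similar homogeneous regions across unit areas (sketch of S25).
    import numpy as np

    def maybe_merge(region_a, region_b, thmepr):
        # Merge when the difference of the mean parameter values is below THMEPR.
        if abs(float(np.mean(region_a)) - float(np.mean(region_b))) < thmepr:
            return np.concatenate([region_a, region_b])
        return None

    a = np.array([0.800, 0.810, 0.790])
    b = np.array([0.805, 0.795])
    print(maybe_merge(a, b, thmepr=0.01))   # means differ by ~0, so merged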
[0063] Next, in step S26, the presence or absence of noise within each homogeneous region is determined. For example, this determination takes as a reference the average of the parameter values of all pixels in the homogeneous region: when the difference between a pixel's parameter value and this average is larger than the noise determination threshold THN set in step S00, the pixel is judged to be noise. For surface normal vectors, the difference from the average vector angle becomes large when a pixel is noise; THN is therefore set to, say, 30 degrees, and a pixel whose difference from the average exceeds 30 degrees is judged to be noise. For the diffuse reflection component ratio kd, THN is set to, say, 0.2, and a pixel whose difference from the average exceeds 0.2 is judged to be noise.
[0064] When a homogeneous region is judged to contain noise (Yes in S26), the noise within the homogeneous region is removed in step S27. FIG. 9 shows an example of noise removal: the gray pixels form a homogeneous region, and P1 and P2 are the pixels judged to be noise. For example, among the 8 pixels surrounding a pixel judged to be noise, the average of the parameter values of those belonging to the homogeneous region is computed, and the noise value is replaced with it. For pixel P1, all 8 surrounding pixels belong to the homogeneous region, so its value is replaced with the average of the parameter values of all 8; for pixel P2, 2 of the 8 surrounding pixels belong to the homogeneous region, so its value is replaced with the average of those 2. The noise removal method described here is only an example, and any method may be used.
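A sketch of this replacement rule is given below for a scalar parameter map; the array `mask` marks membership in the homogeneous region, and the 8-neighbour averaging follows the P1/P2 example of FIG. 9. The array names and boundary handling are our assumptions.

    # Noise detection (S26) and removal (S27) inside one homogeneous region.
    import numpy as np

    def remove_noise(param, mask, thn):
        region_mean = param[mask].mean()
        out = param.copy()
        h, w = param.shape
        for y in range(h):
            for x in range(w):
                if mask[y, x] and abs(param[y, x] - region_mean) > thn:
                    # Average of the 8-neighbours that belong to the region.
                    neigh = [param[j, i]
                             for j in range(max(0, y - 1), min(h, y + 2))
                             for i in range(max(0, x - 1), min(w, x + 2))
                             if (j, i) != (y, x) and mask[j, i]]
                    if neigh:
                        out[y, x] = float(np.mean(neigh))
        return out

    p = np.full((5, 5), 0.8)
    p[2, 2] = 0.1                                  # one outlier in the kd map
    print(remove_noise(p, np.ones((5, 5), bool), thn=0.2)[2, 2])  # back to 0.8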
[0065] As a result of step S20, the pixels that do not fall within any homogeneous region form edges.
[0066] Then, in step S30 as the third step, conversion processing of each parameter is performed, for each homogeneous region identified in step S20, according to the content of the predetermined image conversion.
[0067] FIG. 10 is a conceptual diagram of the processing when image enlargement is performed as the image conversion. As shown in FIG. 10, for image enlargement the parameter is densified within each homogeneous region. FIG. 10(a) shows the parameter distribution before conversion: a homogeneous region AE1 with average parameter value P1 is adjacent to a homogeneous region AE2 with average parameter value P2, and the luminance difference between the pixels S1 and S2 located at the boundary between AE1 and AE2 forms an edge. To enlarge the distribution of FIG. 10(a) by, for example, a factor of 2, a white-circle pixel is inserted between each pair of black-circle pixels, as shown in FIG. 10(b). The parameter value of a white-circle pixel is taken, for example, from the adjacent black-circle pixel. Between pixels S1 and S2, a new pixel S3 may be generated by copying either of their parameter values as it is. In FIG. 10(b), the parameter value of pixel S1 is copied to pixel S3, and the luminance difference between pixels S2 and S3 is made to match the luminance difference between pixels S1 and S2 in FIG. 10(a). The edge is thereby preserved.
[0068] Every part that does not belong to a homogeneous region is treated as edges. For example, in the part sandwiched between the homogeneous regions AC1 and AC2 in FIG. 2(e), there are 10 intervals between pixels, and each of them is regarded as an edge. Densification is performed by copying the parameter value of an adjacent pixel, as for pixel S3 in FIG. 10: for example, copying from the pixel to the left of the insertion position, copying from the right, or alternating between left and right every other interval, as in the sketch below.
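For a single row of parameter values, the densification for a twofold enlargement can be sketched as follows; copying always from the left neighbour is one of the options the text allows, and the 1-D simplification is ours.

    # Densification within/between homogeneous regions for 2x enlargement (sketch).
    import numpy as np

    def densify_2x(values):
        # Insert one pixel after each original pixel, copying the left
        # neighbour; the step at a region boundary (the edge) is preserved.
        return np.repeat(values, 2)[:-1]

    row = np.array([0.2, 0.2, 0.2, 0.8, 0.8, 0.8])   # two regions, one edge
    print(densify_2x(row))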
[0069] FIG. 11 is a conceptual diagram of the processing when image reduction is performed as the image conversion. As shown in FIG. 11, for image reduction the parameter is thinned within each homogeneous region. The thinning method is arbitrary; FIG. 11 uses, as an example, the average of the parameter values of surrounding pixels. FIG. 11(a) shows the parameter distribution before conversion: a homogeneous region AF1 with average parameter value P1 is adjacent to a homogeneous region AF2 with average parameter value P2, and the luminance difference between the pixels S6 and S7 located at the boundary between AF1 and AF2 forms an edge. The distribution of FIG. 11(a) is reduced to, for example, 1/2 to generate the distribution of FIG. 11(b). In homogeneous region AF1, the average of the parameter values in pixel group SG1 becomes the parameter value of pixel S4, and the average of the parameter values in pixel group SG2 becomes the parameter value of pixel S5, which realizes the thinning. Pixel groups SG1 and SG2 are made to overlap partially, which smooths the change of the parameter values in the reduced image. The luminance difference between pixels S6 and S7, the edge in FIG. 11(a), is preserved as the luminance difference between pixels S7 and S8 in FIG. 11(b); that is, the parameter value of pixel S8 is copied from pixel S6.
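A 1-D sketch of this thinning for a 1/2 reduction inside one homogeneous region is given below; the group size of 3 and the one-pixel overlap between groups are illustrative choices, not values from the text.

    # Thinning within a homogeneous region for 1/2 reduction (sketch).
    import numpy as np

    def thin_half(values, group=3, step=2):
        # Each output pixel is the mean of a group; consecutive groups
        # overlap by one pixel, which smooths the reduced parameter values.
        return np.array([values[i:i + group].mean()
                         for i in range(0, len(values) - group + 1, step)])

    region = np.array([0.80, 0.82, 0.79, 0.81, 0.80, 0.78, 0.81])
    print(thin_half(region))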
[0070] Then, in step S40 as the fourth step, the luminance of each pixel of the second image after the predetermined image conversion is obtained using the parameters resulting from the conversion processing of step S30: the parameters are substituted into the illumination equation of (Equation 1) and the reflected light intensity Iv is calculated for each pixel.
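Since (Equation 1) itself is not reproduced in this excerpt, the sketch below evaluates the simpler diffuse-reflection form of (Equation 5) given in the next section; the variable names are illustrative, and the specular term of (Equation 1) is omitted.

    # Step S40: recompose pixel luminance from the converted parameters (sketch).
    import numpy as np

    def luminance_diffuse(iv_a, ii, n_vec, l_vec, d_omega, rho_d):
        n = n_vec / np.linalg.norm(n_vec)        # surface normal N
        l = l_vec / np.linalg.norm(l_vec)        # light source vector L
        return iv_a + ii * max(float(np.dot(n, l)), 0.0) * d_omega * rho_d

    print(luminance_diffuse(0.05, 1.0, np.array([0.0, 0.0, 1.0]),
                            np.array([0.0, 1.0, 1.0]), 0.1, 0.8))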
[0071] As described above, according to this embodiment, the luminance is decomposed into illumination equation parameters, and image conversion such as image enlargement or image reduction is performed using the inter-pixel correlation of each parameter. Since the image conversion is executed for each parameter per homogeneous region, edge portions are preserved as boundary conditions between homogeneous regions. Moreover, since the homogeneous regions are identified from the similarity of the illumination equation parameters, which are physical characteristics of the subject, they are, so to speak, determined with physical backing. Image conversion with stable image quality can therefore be realized while preserving edge sharpness and texture.
[0072] An image conversion apparatus may also be configured that includes a parameter acquisition unit executing step S10, a homogeneous region identification unit executing step S20, a parameter conversion unit executing step S30, and a luminance calculation unit executing step S40.
[0073] <Other Examples of the Illumination Equation>
The illumination equation used in the present invention is not limited to the one shown in this embodiment; for example, the following may also be used.
[Equation 5]

$$I_v = I_{v,a} + I_i\,(\vec{N}\cdot\vec{L})\,d\omega\,\rho_d$$

[Equation 6]

$$I_v = I_{v,a} + I_i\,(\vec{N}\cdot\vec{L})\,d\omega\,\rho$$

[Equation 7]

$$I_v = I_{v,a} + I_{v,i}$$
[0074] (Equation 5) is intended for diffusely reflecting objects and has 6 parameters, where Iv,a denotes the light intensity from the surroundings toward the viewing direction. (Equation 6) does not separate diffuse and specular reflection and has 5 parameters. (Equation 7) does not take reflectance into account and has 2 parameters, where Iv,i denotes the light intensity from the pixel of interest toward the viewing direction.
[0075] (Second Embodiment)
In the second embodiment of the present invention, image compression is performed as the predetermined image conversion. The basic processing flow is the same as in the first embodiment: in step S30, each parameter is compression-encoded for the image compression. In this case, step S40 is normally not executed, and the compressed image data is transferred or recorded. When the image is to be reproduced, each parameter is decoded and the luminance of each pixel is calculated. As in the first embodiment, the image conversion method according to this embodiment can be realized by causing a computer to execute a program implementing the method. An image compression apparatus may also be configured that includes a parameter acquisition unit executing step S10, a homogeneous region identification unit executing step S20, and a parameter compression unit executing step S30.
[0076] FIG. 12 is a conceptual diagram of the parameter conversion processing in this embodiment. In FIG. 12(a), the white circles represent the parameter values of pixels belonging to the homogeneous regions AG1 to AG3, and the hatched circles represent the parameter values of pixels not belonging to any homogeneous region. As FIG. 12(a) shows, the parameter values within each of the homogeneous regions AG1 to AG3 are nearly uniform, so the information carried by the parameter values is almost entirely concentrated in their average. Therefore, for each of the homogeneous regions AG1 to AG3, the average parameter value and the difference between each pixel's parameter value and that average are encoded, and only a small code amount is allocated to the differences. This allows the parameter values to be compression-encoded with a small code amount without impairing image quality.
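The mean-plus-difference coding can be sketched as follows; the quantization step and the container format are our assumptions, added only to show why the differences need few bits.

    # "Difference from average" coding of one homogeneous region (sketch).
    import numpy as np

    def encode_region(values, q_step=0.0005):
        mean = float(np.mean(values))
        # Small quantized residuals: few bits per pixel inside the region.
        diffs = np.round((values - mean) / q_step).astype(np.int8)
        return mean, diffs

    def decode_region(mean, diffs, q_step=0.0005):
        return mean + diffs.astype(float) * q_step

    vals = np.array([0.8010, 0.7990, 0.8020, 0.8000])
    mean, diffs = encode_region(vals)
    print(diffs, decode_region(mean, diffs))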
[0077] For example, as shown in the code sequence of FIG. 12(b), for homogeneous region AG1 the coding type TP1 is declared first (here, "difference from average"), followed by the average value D1 and the differences D2 from the average at each pixel, and finally the delimiter signal SG1 is appended. Instead of appending the delimiter signal SG1, a special code may be assigned as a coding type so that the boundary can be recognized. When the differences D2 are small enough to be ignored, run-length coding may be applied.
[0078] For pixels that do not belong to any homogeneous region, the parameter values change irregularly, so encoding the difference from an average cannot be expected to compress the data amount. For these pixels, the orthogonal transforms adopted in, for example, JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group) may be used: "orthogonal transform" is declared as coding type TP2, and the frequency coefficients D3 are encoded in order starting from the first frequency term. When homogeneous regions occupy most of the image, the parameter values of the pixels not belonging to any homogeneous region may also simply be encoded as they are.
[0079] After the delimiter signal SG2, for the homogeneous regions AG2 and AG3, "difference from average" is declared as the coding types TP3 and TP4, as was done for homogeneous region AG1.
[0080] As described above, according to this embodiment, decomposing the luminance value into its constituent parameters and taking the correlation with neighboring pixels yields a higher correlation than the luminance values themselves, and the compression efficiency can therefore be improved. Moreover, since the compression encoding is performed per homogeneous region, a higher compression ratio than a luminance-value-based approach can be achieved while preserving sharpness and texture.
[0081] (Third Embodiment)
In the third embodiment of the present invention, the image conversion method described above is applied to texture mapping in computer graphics.
[0082] FIG. 13 is a flowchart of the main flow of rendering processing. In computer graphics, rendering is the processing that converts a three-dimensional model generated inside a computer into two-dimensional image data (see, for example, p. 79 of Non-Patent Document 1). As shown in FIG. 13, the main steps of the rendering processing are viewpoint and light source setting S101, coordinate transformation S102, hidden surface removal S103, shading and shadowing S104, texture mapping S105, and viewport transformation S106.
[0083] First, in step S101, the viewpoint VA and the light source LS are set, which determines the appearance. Next, in step S102, the objects managed in local coordinate systems are brought together into a normalized coordinate system, and in step S103, the hidden surface portions invisible from the viewpoint are removed. Then, in step S104, how the light from the light source LS falls on the objects OA and OB is calculated, and shade and shadow are generated.
[0084] Then, in step S105, texture mapping is performed, and the textures TA and TB for the objects OA and OB are generated. A texture is generally acquired as image data; the texture image TIA is deformed to match the shape of object OA and composited onto object OA, and similarly the texture image TIB is deformed to match the shape of object OB and composited onto object OB.
[0085] In this embodiment, the image conversion described above is applied to this texture mapping. First, preprocessing is performed to paste the texture images TIA and TIB onto the objects OA and OB of the three-dimensional CG model, and then processing proceeds according to the flow of FIG. 1. In step S10, the parameters are acquired for each pixel of the texture images TIA and TIB pasted onto the objects OA and OB, using the optical parameters of the two-dimensional texture images TIA and TIB and the surface normal vectors of the objects OA and OB. The subsequent processing is the same as in the first embodiment. The texture mapping method according to this embodiment can also be realized by causing a computer to execute a program implementing the method. A texture mapping apparatus may also be configured that includes a preprocessing unit performing the above preprocessing, a parameter acquisition unit executing step S10, a homogeneous region identification unit executing step S20, a parameter conversion unit executing step S30, and a luminance calculation unit executing step S40.
[0086] Finally, in step S106, viewport transformation is performed to generate a two-dimensional image whose size matches the screen SCN or the window WND on which it is displayed.
[0087] Rendering must be executed every time the viewpoint or the position of the light source changes, and in an interactive system such as a game machine the rendering processing is repeated frequently. Since texture mapping usually prepares the texture data to be pasted onto object surfaces as images, the texture data must be converted by enlargement, reduction, rotation, color change, and so on each time the viewpoint or the light source changes.
[0088] Therefore, performing the image conversion per parameter as in this embodiment makes it possible to realize texture mapping that responds to various viewpoint and light source settings while preserving the texture appearance. In particular, when the position of the light source changes, it is difficult to calculate the resulting change of the texture on a luminance-value basis, so a method that can directly control the light source vector, as in this embodiment, is superior in principle to conventional approaches.
[0089] Configuration examples for realizing the present invention are illustrated below.
[0090] (First Configuration Example)
FIG. 14 shows the first configuration example, in which the image conversion according to the present invention is performed using a personal computer. Since the resolution of the camera 101 is lower than that of the display 102, an enlarged image is created by the image conversion program loaded into the main memory 103 in order to exploit the display capability of the display 102 to the full. The low-resolution image captured by the camera 101 is recorded in the image memory 104. The image feature vector database 502 and the illumination equation parameter database 503 shown in FIG. 6 are prepared in advance in the external storage device 105, and the image conversion program in the main memory 103 can refer to them.
[0091] The processing by the image conversion program is the same as in the first embodiment: a homogeneous region is determined for each illumination equation parameter, and densification is performed within the homogeneous regions. That is, the low-resolution image in the image memory 104 is read via the memory bus 106, enlarged to match the resolution of the display 102, and transferred via the memory bus 106 to the video memory 107. The enlarged image transferred to the video memory 107 is displayed on the display 102.
[0092] The present invention is not restricted to the configuration of FIG. 14 and can take various forms. For example, the illumination equation parameters may be measured directly from the subject with measuring instruments; in that case, the image feature vector database 502 and the illumination equation parameter database 503 in the external storage device 105 are unnecessary. The low-resolution image may also be acquired from the network 108. It is also possible to hold texture data in the external storage device 105 and execute, in the main memory 103, the texture mapping described in the third embodiment.
[0093] When the resolution of the camera 101 is higher than that of the display 102, the image conversion program loaded into the main memory 103 may perform image reduction as described in the first embodiment. Image compression according to the second embodiment may also be performed; in that case, the illumination equation parameters can be data-compressed and transmitted over the network 108 or the like.
[0094] As the camera 101, any type of imaging device, such as a camera-equipped mobile phone, a digital still camera, or a video movie camera, can be used. Furthermore, the present invention can also be realized in a playback device that plays back previously recorded video.
[0095] (Second Configuration Example)
FIG. 15 shows the second configuration example, in which the image conversion according to the present invention is performed using a server-client system. Since the resolution of the camera 201 is lower than that of the display 202, image enlargement is executed within the server-client system in order to exploit the display capability of the display 202 to the full. As in FIG. 6, the server 301 includes an image feature analysis unit 501, an image feature vector database 502, and an illumination equation parameter database 503; it calculates the illumination equation parameters IINLEP from the input image IIN and outputs them to the parameter operation unit 205. This operation corresponds to step S10 in the flow of FIG. 1. The image feature analysis unit 501, the image feature vector database 502, and the illumination equation parameter database 503 constitute the parameter acquisition unit.
[0096] Meanwhile, an image conversion instruction is passed from the image conversion instruction unit 203 of the client 302 to the parameter operation instruction unit 204 of the server 301 as an image conversion instruction signal ICIS. The parameter operation instruction unit 204 translates the content of the image conversion given by the image conversion instruction signal ICIS into operations on the illumination parameters and outputs these to the parameter operation unit 205 as a parameter operation instruction signal LEPS. Following the image conversion method described in the first embodiment, the parameter operation unit 205 operates on the illumination equation parameters IINLEP to perform image enlargement or image compression and generates the new parameter values IOUTLEP. This operation corresponds to steps S20 and S30 in the flow of FIG. 1. The parameter operation unit 205 corresponds to the homogeneous region identification unit and the parameter conversion unit.
[0097] Through these operations, the server 301 can provide the client 302, via the network 206, with the new parameter values IOUTLEP that follow the image conversion instruction from the client 302. In the client 302 that has received the new parameter values IOUTLEP, the image generation unit 207, serving as the luminance calculation unit, generates the enlarged image and supplies it to the display 202. This operation corresponds to step S40 in the flow of FIG. 1.
[0098] The present invention is not restricted to the configuration of FIG. 15. When the resolution of the camera 201 is higher than that of the display 202, the parameter operation unit 205 may perform image reduction as described in the first embodiment. Moreover, if the parameter operation unit 205 operates as an encoding device according to the second embodiment and the image generation unit 207 operates as a decoding device, compressed data can be distributed over the network 206.
[0099] The combination of image devices and the position of each means in the system (whether it belongs to the server 301, to the client 302, or elsewhere) are arbitrary. As the camera 201, any type of imaging device, such as a camera-equipped mobile phone, a digital still camera, or a video movie camera, can be used. Furthermore, the present invention can also be realized in a playback device that plays back previously recorded video.
[0100] (Third Configuration Example)
FIG. 16 shows the third configuration example, in which the image conversion according to the present invention is performed during photographing with a camera.
[0101] The camera 401 has a wide-angle lens 402 and can photograph a wide field of view, for example with an angle of view of 180 degrees, in a single shot. By mounting the wide-angle lens 402 facing upward, the light source 403 can be photographed. An xyz three-dimensional coordinate system is defined with the optical axis of the wide-angle lens 402 as the z-axis, the horizontal direction of the wide-angle image sensor 404 inside the camera 401 as the x-axis, and the vertical direction of the wide-angle image sensor 404 as the y-axis, taking the focal position of the wide-angle lens 402 as the coordinate origin, and the light source vector L is obtained in this system.
[0102] FIG. 17(a) shows the relationship between the position of the light source 403 and the wide-angle image 405 photographed through the wide-angle lens 402. Consider the case where the position of the light source 403 moves along the curve LT. As the light source 403 moves from position PS1 to position PS5 on the curve LT, it is recorded from position PX1 to position PX5 on the straight line ST of the wide-angle image 405. For the case where the light source 403 is at position PS2, the method of obtaining the light source vector L2 is explained below, with θ denoting the angle between the straight line ST and the x-axis, and φ the angle between the straight line ST and the light source vector L2.
[0103] FIG. 17(b) shows the wide-angle image 405 of FIG. 17(a) viewed from the z-axis direction, with d the distance between position PX1 and the coordinate origin O, and r the distance between position PX2 and the coordinate origin O. Position PX1 corresponds to φ = 0 and the coordinate origin O to φ = 90 degrees, and since the light source positions in between are mapped linearly onto the wide-angle image, the angle φ at position PX2 is given by

[Equation 10]

$$\varphi = \frac{\pi\,(d - r)}{2d}$$
[0104] Here, letting the pixel positions of position PX1, position PX2, and the coordinate origin O on the wide-angle image be (x_{L1}, y_{L1}), (x_{L2}, y_{L2}), and (x_O, y_O), respectively, the distance d between position PX1 and the coordinate origin O is given by

[Equation 11]

$$d = \sqrt{(x_{L1} - x_O)^2 + (y_{L1} - y_O)^2}$$

and the distance r between position PX2 and the coordinate origin O is given by

[Equation 12]

$$r = \sqrt{(x_{L2} - x_O)^2 + (y_{L2} - y_O)^2}$$
FIG. 17(c) shows the triangle obtained by drawing, from position PX2 in the z-axis direction, the line of intersection LT with the light source vector L2. Letting z_{L2} denote the length of the intersection line LT, the following is obtained:

[Equation 13]

$$z_{L2} = r \tan\varphi$$

If the light source vector L2 is defined as a unit vector of length 1, then

[Equation 14]

$$\mathbf{L}_2 = \frac{1}{\sqrt{r^2 + z_{L2}^2}}\,\bigl(x_{L2} - x_O,\; y_{L2} - y_O,\; z_{L2}\bigr)^{\mathsf T}$$

is obtained.
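A minimal numerical sketch of (Equation 10) through (Equation 14) is given below; pixel positions are assumed to be (x, y) pairs in the wide-angle image, and the function name is illustrative.

    # Light source vector from the wide-angle image (sketch of Eq. 10-14).
    import numpy as np

    def light_source_vector(px_l1, px_l2, px_o):
        d = float(np.hypot(px_l1[0] - px_o[0], px_l1[1] - px_o[1]))  # Eq. 11
        r = float(np.hypot(px_l2[0] - px_o[0], px_l2[1] - px_o[1]))  # Eq. 12
        phi = np.pi * (d - r) / (2.0 * d)                            # Eq. 10
        z_l2 = r * np.tan(phi)                                       # Eq. 13
        v = np.array([px_l2[0] - px_o[0], px_l2[1] - px_o[1], z_l2])
        return v / np.linalg.norm(v)                                 # Eq. 14

    print(light_source_vector((200.0, 0.0), (120.0, 90.0), (0.0, 0.0)))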
The subject is photographed with the subject photographing lens 406 and the subject image sensor 407, and the image conversion unit 408 converts the first image, output by the subject image sensor 407, into the second image. The image conversion unit 408 executes, for example, the image enlargement following the flowchart of FIG. 1 or the image compression following FIG. 12. There is no restriction on the coordinate system used for the image conversion, but since the conversion is applied to the output of the subject image sensor 407, it is preferable to use the xyz three-dimensional coordinate system of the subject image sensor 407. The light source vector of (Equation 14), expressed in the xyz coordinate system of the wide-angle image sensor 404, is therefore converted into the xyz coordinate system of the subject image sensor 407. This conversion of coordinate systems can be realized as a conversion of the coordinate axes. Let a_x be the x-axis of the xyz coordinate system of the wide-angle image sensor 404 expressed as a vector in the xyz coordinate system of the subject image sensor 407, and let b_x be the same x-axis expressed in the xyz coordinate system of the wide-angle image sensor 404 itself; defining a_y, a_z, b_y, b_z likewise for the y-axis and z-axis, the axis vectors are related through a 3×3 matrix M as

[Equation 15]

$$\bigl(\mathbf{a}_x \;\; \mathbf{a}_y \;\; \mathbf{a}_z\bigr) = M\,\bigl(\mathbf{b}_x \;\; \mathbf{b}_y \;\; \mathbf{b}_z\bigr)$$

Solving this for the matrix M gives

[Equation 16]

$$M = \bigl(\mathbf{a}_x \;\; \mathbf{a}_y \;\; \mathbf{a}_z\bigr)\,\bigl(\mathbf{b}_x \;\; \mathbf{b}_y \;\; \mathbf{b}_z\bigr)^{-1}$$

By applying this matrix M to (Equation 14), the light source vector L is converted from the xyz coordinate system of the wide-angle image sensor 404 to the xyz coordinate system of the subject image sensor 407.
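The axis-based change of coordinate system can be sketched as follows; the axis directions used here are hypothetical, chosen only to show the mechanics of (Equation 15) and (Equation 16).

    # Re-expressing the light source vector in the subject sensor's system.
    import numpy as np

    ax = np.array([0.0, 1.0, 0.0])     # wide-angle x-axis in subject coords
    ay = np.array([-1.0, 0.0, 0.0])    # wide-angle y-axis in subject coords
    az = np.array([0.0, 0.0, 1.0])     # wide-angle z-axis in subject coords
    A = np.column_stack([ax, ay, az])
    B = np.eye(3)                      # the same axes in their own system
    M = A @ np.linalg.inv(B)           # Equation 16

    L_wide = np.array([0.3, 0.4, 0.866])   # light vector, wide-angle system
    print(M @ L_wide)                      # light vector, subject system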
[0107] Since the light source is in most cases located above the camera 401, the light source 403 can be photographed by using, for example, the wide-angle lens 402 with its 180-degree angle of view. If the angle of view were insufficient and the light source 403 could not be captured within the angle of view of the wide-angle lens 402, the orientation of the camera 401 would be changed so as to bring the light source 403 into the angle of view. Since the change in the orientation of the camera 401 must then be measured, a three-dimensional attitude sensor 409 (composed of acceleration sensors or the like) is built into the camera 401, the three-dimensional motion of the xyz coordinate axes of the wide-angle image sensor 404 is acquired from the three-dimensional attitude sensor 409, and the coordinates are transformed in the same manner as in (Equation 16).
[0108] As another way of changing the camera orientation, the structure of a foldable mobile phone is also effective. As shown in FIG. 18, the mobile phone 601 has a far-side camera 602 (a camera that photographs the subject in front of the user of the mobile phone 601) and a near-side camera 603 (a camera that photographs the user of the mobile phone 601), and the far-side camera 602 changes its orientation greatly as the folded display unit 604 is opened. That is, as shown in (a), when the opening angle DAG of the display unit 604 is small, it captures the area above the mobile phone 601; as shown in (c), when the opening angle DAG is large, it captures the area in front of the user of the mobile phone 601; and as shown in (b), when the opening angle DAG is intermediate, it captures the direction between the area above the mobile phone 601 and the area in front of the user. The opening angle DAG of the display unit 604 is therefore detected by an angle sensor 606 provided in the hinge 605, and the orientation of the far-side camera 602 is calculated. As the xyz three-dimensional coordinate system, for example, the focal position of the near-side camera 603 is taken as the coordinate origin, and from its relation to the focal position of the far-side camera 602, which is determined by the structure of the mobile phone 601, the images captured by the two cameras can be managed in the same xyz three-dimensional coordinate system in the manner of (Equation 16). It is obvious that the near-side camera 603 can also be used to photograph the light source. In this way, the light source vector L, among the parameters of the illumination equation shown in FIG. 3, can be calculated.
[0109] Furthermore, by equipping the camera 401 with a polarizing filter, the reflected light from the object entering the subject photographing lens 406 can be separated into a diffuse reflection component and a specular reflection component, for example by the method described with (Equation 4) and FIG. 5. Using the diffuse reflection component, the surface normal vector N can be calculated by the photometric stereo method described with (Equation 9). As explained with (Equation 8), the photometric stereo method requires three or more images with different light source directions. Therefore, if the light source 403 is movable, (Equation 8) can be obtained by setting the position of the light source 403 in three or more ways and photographing each time. Alternatively, when the subject moves, the positional relationship between the light source and the subject changes and, as a result, the direction of the light source changes; (Equation 8) can thus also be obtained by tracking a specific point on the subject and photographing three or more times. The specular reflection component, on the other hand, corresponds to ks·ρs in (Equation 1), and once the light source vector L and the surface normal vector N are known, the unknown parameters contained in (Equation 2) are the four quantities: the specular reflection component ratio ks, the Fresnel coefficient Fλ, the microfacet distribution m, and the refractive index n. These parameters can be obtained, for example, by fitting to multiple sample data with the least squares method, or by measurement with instruments such as a refractometer.
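Since (Equation 8) and (Equation 9) are not reproduced in this excerpt, the sketch below uses the standard Lambertian photometric stereo formulation that the text refers to: with three images under known, non-coplanar light vectors, the diffuse intensities at a pixel satisfy i = S (ρd N), so the scaled normal is recovered by solving the 3×3 system. All numerical values are made up.

    # Photometric stereo at one pixel from three light source directions (sketch).
    import numpy as np

    S = np.array([[0.0, 0.0, 1.0],    # three known unit light source vectors
                  [0.8, 0.0, 0.6],
                  [0.0, 0.8, 0.6]])
    i = np.array([0.9, 0.54, 0.54])   # measured diffuse intensities

    n_scaled = np.linalg.solve(S, i)  # rho_d * N
    rho_d = float(np.linalg.norm(n_scaled))
    N = n_scaled / rho_d              # unit surface normal
    print(rho_d, N)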
[0110] It is obvious that the surface normal vector N can also be measured by additionally using a range finder in the configuration of FIG. 16.

[0111] As described above, the present invention can be executed on widely used personal computers, on server-client systems, and on video equipment in general, such as camera-equipped mobile phones, digital still cameras, video movie cameras, and televisions, and requires no special equipment, operation, or management. The form of equipment connection and the internal configuration of the equipment, such as implementation in dedicated hardware or a combination of software and hardware, are not constrained.
Industrial Applicability
[0112] Since the present invention realizes image conversion with stable image quality while preserving edge sharpness and texture, it can be used, for example, in the field of video entertainment, where scenes in front of one's eyes, such as sports, sightseeing, and commemorative photographs, are recorded as video. In the field of culture and the arts, it can be used to provide a highly flexible digital archive system that is not restricted by the subject or the shooting location.

Claims

[1] An image conversion method for performing a predetermined image conversion on a first image to generate a second image, comprising:
a first step of acquiring, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance;
a second step of identifying, for each of the parameters, a homogeneous region consisting of pixels whose values of the parameter are similar;
a third step of performing, for each of the parameters, conversion processing of the parameter for each homogeneous region identified in the second step, according to the content of the predetermined image conversion; and
a fourth step of obtaining the luminance of each pixel of the second image using the parameters after the conversion processing of the third step.

[2] The image conversion method of claim 1, wherein the predetermined image conversion is image enlargement, and the conversion processing in the third step is processing that densifies the parameter.

[3] The image conversion method of claim 1, wherein the predetermined image conversion is image reduction, and the conversion processing in the third step is processing that thins the parameter.

[4] The image conversion method of claim 1, wherein the acquisition of the plurality of parameters in the first step is performed by measurement from the subject or by estimation from the first image.

[5] The image conversion method of claim 1, wherein in the second step the degree of similarity is evaluated using the variance of the values of the parameter over a plurality of pixels.

[6] The image conversion method of claim 1, wherein the second step includes processing for removing noise within an identified homogeneous region.

[7] A texture mapping method comprising:
a preprocessing step of pasting a texture image onto an object of a three-dimensional CG model;
a first step of acquiring, for each pixel of the texture image pasted onto the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance;
a second step of identifying, for each of the parameters, a homogeneous region consisting of pixels whose values of the parameter are similar;
a third step of performing, for each of the parameters, conversion processing of the parameter for each homogeneous region identified in the second step, according to the content of a predetermined image conversion; and
a fourth step of obtaining the luminance of each pixel of the image of the object using the parameters after the conversion processing of the third step.

[8] An image conversion apparatus for performing a predetermined image conversion on a first image to generate a second image, comprising:
a parameter acquisition unit that acquires, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance;
a homogeneous region identification unit that identifies, for each of the parameters, a homogeneous region consisting of pixels whose values of the parameter are similar;
a parameter conversion unit that performs, for each of the parameters, conversion processing of the parameter for each homogeneous region identified by the homogeneous region identification unit, according to the content of the predetermined image conversion; and
a luminance calculation unit that obtains the luminance of each pixel of the second image using the parameters after the conversion processing by the parameter conversion unit.

[9] A texture mapping apparatus comprising:
a preprocessing unit that pastes a texture image onto an object of a three-dimensional CG model;
a parameter acquisition unit that acquires, for each pixel of the texture image pasted onto the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance;
a homogeneous region identification unit that identifies, for each of the parameters, a homogeneous region consisting of pixels whose values of the parameter are similar;
a parameter conversion unit that performs, for each of the parameters, conversion processing of the parameter for each homogeneous region identified by the homogeneous region identification unit, according to the content of a predetermined image conversion; and
a luminance calculation unit that obtains the luminance of each pixel of the image of the object using the parameters after the conversion processing by the parameter conversion unit.

[10] A server-client system that performs image conversion, comprising:
a server having the parameter acquisition unit, the homogeneous region identification unit, and the parameter conversion unit of claim 8; and
a client having the luminance calculation unit of claim 8,
wherein the client instructs the server as to the content of the image conversion.

[11] An image conversion program that causes a computer to execute a method of performing a predetermined image conversion on a first image to generate a second image, the method comprising:
a first step of acquiring, for each pixel of the first image, a plurality of parameters constituting a predetermined illumination equation that gives luminance;
a second step of identifying, for each of the parameters, a homogeneous region consisting of pixels whose values of the parameter are similar;
a third step of performing, for each of the parameters, conversion processing of the parameter for each homogeneous region identified in the second step, according to the content of the predetermined image conversion; and
a fourth step of obtaining the luminance of each pixel of the second image using the parameters after the conversion processing of the third step.

[12] A texture mapping program that causes a computer to execute:
a preprocessing step of pasting a texture image onto an object of a three-dimensional CG model;
a first step of acquiring, for each pixel of the texture image pasted onto the object, a plurality of parameters constituting a predetermined illumination equation that gives luminance;
a second step of identifying, for each of the parameters, a homogeneous region consisting of pixels whose values of the parameter are similar;
a third step of performing, for each of the parameters, conversion processing of the parameter for each homogeneous region identified in the second step, according to the content of a predetermined image conversion; and
a fourth step of obtaining the luminance of each pixel of the image of the object using the parameters after the conversion processing of the third step.
PCT/JP2005/021687 2004-12-07 2005-11-25 Image conversion method, device, and program, texture mapping method, device, and program, and server-client system WO2006061999A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2006547855A JP3967367B2 (en) 2004-12-07 2005-11-25 Image conversion method, apparatus and program, texture mapping method, apparatus and program, and server client system
US11/369,975 US7486837B2 (en) 2004-12-07 2006-03-07 Method, device, and program for image conversion, method, device and program for texture mapping, and server-client system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004354274 2004-12-07
JP2004-354274 2004-12-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/369,975 Continuation US7486837B2 (en) 2004-12-07 2006-03-07 Method, device, and program for image conversion, method, device and program for texture mapping, and server-client system

Publications (1)

Publication Number Publication Date
WO2006061999A1 true WO2006061999A1 (en) 2006-06-15

Family

ID=36577831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/021687 WO2006061999A1 (en) 2004-12-07 2005-11-25 Image conversion method, device, and program, texture mapping method, device, and program, and server-client system

Country Status (4)

Country Link
US (1) US7486837B2 (en)
JP (1) JP3967367B2 (en)
CN (1) CN100573579C (en)
WO (1) WO2006061999A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1768387B1 (en) * 2005-09-22 2014-11-05 Samsung Electronics Co., Ltd. Image capturing apparatus with image compensation and method therefor
WO2007139067A1 (en) * 2006-05-29 2007-12-06 Panasonic Corporation Image high-resolution upgrading device, image high-resolution upgrading method, image high-resolution upgrading program and image high-resolution upgrading system
US8953684B2 (en) * 2007-05-16 2015-02-10 Microsoft Corporation Multiview coding with geometry-based disparity prediction
GB2458927B (en) * 2008-04-02 2012-11-14 Eykona Technologies Ltd 3D Imaging system
US8463072B2 (en) * 2008-08-29 2013-06-11 Adobe Systems Incorporated Determining characteristics of multiple light sources in a digital image
JP5106432B2 (en) * 2009-01-23 2012-12-26 株式会社東芝 Image processing apparatus, method, and program
TW201035910A (en) * 2009-03-18 2010-10-01 Novatek Microelectronics Corp Method and apparatus for reducing spatial noise of images
JP5273389B2 (en) * 2009-09-08 2013-08-28 株式会社リコー Image processing apparatus, image processing method, program, and recording medium
CN102472620B (en) * 2010-06-17 2016-03-02 松下电器产业株式会社 Image processing apparatus and image processing method
US8274656B2 (en) * 2010-06-30 2012-09-25 Luminex Corporation Apparatus, system, and method for increasing measurement accuracy in a particle imaging device
JP5742427B2 (en) * 2011-04-25 2015-07-01 富士ゼロックス株式会社 Image processing device
US11379968B2 (en) * 2017-12-08 2022-07-05 Panasonic Intellectual Property Management Co., Ltd. Inspection system, inspection method, program, and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5872864A (en) * 1992-09-25 1999-02-16 Olympus Optical Co., Ltd. Image processing apparatus for performing adaptive data processing in accordance with kind of image
US5704024A (en) * 1995-07-20 1997-12-30 Silicon Graphics, Inc. Method and an apparatus for generating reflection vectors which can be unnormalized and for using these reflection vectors to index locations on an environment map
JPH1169372A (en) * 1997-08-14 1999-03-09 Fuji Photo Film Co Ltd Image lightness control method, digital camera used for the same and image processor
EP1008957A1 (en) 1998-06-02 2000-06-14 Sony Corporation Image processing device and image processing method
JP3921015B2 (en) * 1999-09-24 2007-05-30 富士通株式会社 Image analysis apparatus and method, and program recording medium
US20020169805A1 (en) * 2001-03-15 2002-11-14 Imation Corp. Web page color accuracy with image supervision
US6753875B2 (en) * 2001-08-03 2004-06-22 Hewlett-Packard Development Company, L.P. System and method for rendering a texture map utilizing an illumination modulation value
JP4197858B2 (en) * 2001-08-27 2008-12-17 富士通株式会社 Image processing program
US7034820B2 (en) * 2001-12-03 2006-04-25 Canon Kabushiki Kaisha Method, apparatus and program for processing a three-dimensional image
JP2003274427A (en) * 2002-03-15 2003-09-26 Canon Inc Image processing apparatus, image processing system, image processing method, storage medium, and program
JP2005149390A (en) 2003-11-19 2005-06-09 Fuji Photo Film Co Ltd Image processing method and device
CN1910623B (en) * 2005-01-19 2011-04-20 松下电器产业株式会社 Image conversion method, texture mapping method, image conversion device, server-client system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0326181A (en) * 1989-06-23 1991-02-04 Sony Corp System for converting image
JPH0737105A (en) * 1993-07-19 1995-02-07 Hitachi Ltd Plotting method for outline and ridgeline
JPH0944654A (en) * 1995-07-26 1997-02-14 Sony Corp Image processing device and method therefor, and noise eliminating device and method therefor
JP2000057378A (en) * 1998-06-02 2000-02-25 Sony Corp Image processor, image processing method, medium, and device and method for extracting contour
JP2000137833A (en) * 1998-10-29 2000-05-16 Mitsubishi Materials Corp Device and method for track generation and recording medium thereof
JP2003216973A (en) * 2002-01-21 2003-07-31 Canon Inc Method, program, device and system for processing three- dimensional image

Also Published As

Publication number Publication date
CN100573579C (en) 2009-12-23
US7486837B2 (en) 2009-02-03
US20060176520A1 (en) 2006-08-10
CN101040295A (en) 2007-09-19
JPWO2006061999A1 (en) 2008-06-05
JP3967367B2 (en) 2007-08-29

Similar Documents

Publication Publication Date Title
JP3967367B2 (en) Image conversion method, apparatus and program, texture mapping method, apparatus and program, and server client system
JP3996630B2 (en) Image conversion method, texture mapping method, image conversion apparatus, server client system, image conversion program, shadow recognition method, and shadow recognition apparatus
US8131116B2 (en) Image processing device, image processing method and image processing program
US10887519B2 (en) Method, system and apparatus for stabilising frames of a captured video sequence
JP4435867B2 (en) Image processing apparatus, method, computer program, and viewpoint conversion image generation apparatus for generating normal line information
US7688363B2 (en) Super-resolution device, super-resolution method, super-resolution program, and super-resolution system
US20050219642A1 (en) Imaging system, image data stream creation apparatus, image generation apparatus, image data stream generation apparatus, and image data stream generation system
EP2120007A1 (en) Image processing system, method, device and image format
EP2061005A2 (en) Device and method for estimating depth map, and method for generating intermediate image and method for encoding multi-view video using the same
US20180174326A1 (en) Method, System and Apparatus for Determining Alignment Data
US7348990B2 (en) Multi-dimensional texture drawing apparatus, compressing apparatus, drawing system, drawing method, and drawing program
JPWO2006033257A1 (en) Image conversion method, image conversion apparatus, server client system, portable device, and program
WO2007108041A1 (en) Video image converting method, video image converting device, server client system, mobile apparatus, and program
Dumont et al. A Prototype for Practical Eye-Gaze Corrected Video Chat on Graphics Hardware.
KR102146839B1 (en) System and method for building real-time virtual reality
Farin et al. Enabling arbitrary rotational camera motion using multisprites with minimum coding cost
JP2014039126A (en) Image processing device, image processing method, and program
Farin et al. Minimizing MPEG-4 sprite coding cost using multi-sprites
WO2019008233A1 (en) A method and apparatus for encoding media content
Georgiev et al. A general framework for depth compression and multi-sensor fusion in asymmetric view-plus-depth 3D representation
Li et al. Image panoramic mosaicing with global and local registration
Lee Low complexity mosaicking and up-sampling techniques for high resolution video display

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 11369975

Country of ref document: US

AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 11369975

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2006547855

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 200580034775.X

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05809130

Country of ref document: EP

Kind code of ref document: A1

WWW Wipo information: withdrawn in national office

Ref document number: 5809130

Country of ref document: EP