WO2005067294A1 - Image processing method, image processing apparatus, and image processing program - Google Patents
Image processing method, image processing apparatus, and image processing program
- Publication number
- WO2005067294A1 (PCT/JP2004/019374)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image feature
- resolution
- amount
- texture
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
Definitions
- Image processing method, image processing apparatus, and image processing program
- The present invention relates to image processing such as image measurement/recognition and image generation, and more particularly to a technique for generating, from an original image, an information-densified image whose information amount exceeds that of the original image.
- Super-resolution in which the resolution, defined by the number of pixels and lines, is increased is called spatial domain super-resolution.
- Time domain super-resolution is required when a higher temporal resolution than that of the original sampling is needed. For example, when video recorded by interlaced scanning is displayed on a progressive-scan display, time domain super-resolution processing with a magnification of 2 is required. Such processing is frequently used, for example, when converting analog broadcast material for digital broadcasting.
- Such super-resolution processing can be regarded as an interpolation problem that generates new data.
- The basic idea of interpolation is that new data is inferred from the existing data in its neighborhood.
- In spatial domain super-resolution, a pixel's signal value is estimated from the signal values of adjacent pixels in the horizontal, vertical, and oblique directions. In time domain super-resolution, data is inferred from the preceding and following data.
- Common methods include the nearest neighbor method, the linear method, and the bicubic method (Non-Patent Document 1).
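As a brief illustration of the conventional interpolation methods named above (the patent only names them, so the function and sample values below are illustrative, not from the document), a minimal 1-D sketch of nearest-neighbor and linear upsampling might look like this:

```python
import numpy as np

def upsample_1d(signal, factor, method="linear"):
    """Upsample a 1-D signal by an integer factor using a classic
    interpolation method (nearest-neighbor or linear)."""
    n = len(signal)
    # Coordinates of the new samples in the original index space.
    x_new = np.arange(n * factor) / factor
    if method == "nearest":
        idx = np.clip(np.round(x_new).astype(int), 0, n - 1)
        return signal[idx]
    if method == "linear":
        return np.interp(x_new, np.arange(n), signal)
    raise ValueError(method)

line = np.array([10.0, 40.0, 20.0, 30.0])
print(upsample_1d(line, 2, "nearest"))
print(upsample_1d(line, 2, "linear"))
```

Both methods only reuse neighboring sample values, which is why, as the following paragraphs note, they blur fine detail rather than create it.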
- A proposal has also been made to compensate for the blurring caused by these interpolation methods by supplementing high-frequency components (Patent Document 1).
- In Patent Document 2, a method has been proposed in which a large number of low-resolution data are collected so as to include overlapping areas and connected at corresponding points, thereby realizing super-resolution processing.
- Non-Patent Document 1: Shinya Araya, "Mysterious 3D Computer Graphics", Kyoritsu Shuppan, September 25, 2003, pp. 144-145
- Patent Document 1: JP-A-2002-116271 (FIG. 2)
- Patent Document 2: JP-A-8-226879 (FIG. 2)
- In these methods, the pattern and gloss of the object surface are not considered at all, and no mechanism is provided to preserve the pattern and gloss of the original image in the image after super-resolution. In other words, if the texture of the original image changes in quality due to the super-resolution processing, the apparent material of the object captured in the image may change.
- Patent Document 2 has the problem that shooting must be performed a plurality of times, which increases the number of work steps.
- The present invention analyzes image features of an original image (e.g., density distribution, frequency distribution, contrast, etc.) and, from the analyzed image features, generates an information-densified image whose image information amount (e.g., the number of pixels, the number of gradations, the number of color channels, etc.) exceeds that of the original image.
- In one aspect, the image information amount is the resolution.
- Texture is used as a general term for attributes of the input image such as pattern and gloss.
- The analyzed texture features are converted into texture features of higher spatial or temporal resolution, and a super-resolution image in the space domain or the time domain is generated using the super-resolution texture feature amount obtained in this way.
- According to the present invention, it is possible to generate an information-densified image that exceeds the image information amount of the original image while preserving the image features of the original image without deterioration.
- When the image information amount is the resolution, a spatial domain or time domain super-resolution image can be generated while preserving the texture impression of the original image.
- FIG. 1 is a block diagram showing a configuration of an image processing apparatus according to a first embodiment of the present invention.
- FIG. 2 is an example of a configuration of the information amount densification unit in FIG. 1.
- FIG. 3 is a conceptual diagram of a spatial domain super-resolution processing according to the first embodiment of the present invention.
- FIG. 4 is a block diagram showing a schematic configuration of an image processing device that performs a spatial domain super-resolution process according to the first embodiment of the present invention.
- FIG. 5 is a diagram showing an example of the configuration and processing of a texture analysis unit in FIG. 4.
- FIG. 6 is a diagram showing an example of the configuration and processing of a super-resolution processing unit in FIG. 4.
- FIG. 7 is a diagram illustrating an example of a configuration and processing of an image generation unit in FIG. 4.
- FIG. 8 is a block diagram showing a configuration of an image processing device according to a second embodiment of the present invention.
- FIG. 9 is a block diagram showing a schematic configuration of an image processing apparatus that performs spatial domain super-resolution processing according to the second embodiment of the present invention.
- FIG. 10 is a diagram showing an example of a method of calculating a basic texture feature weighting coefficient in the configuration of FIG. 9.
- FIG. 11 is a diagram showing a configuration of an image processing device according to a third embodiment of the present invention.
- FIG. 12 is a diagram showing an example of a method of creating the texture feature vector conversion table of FIG. 11.
- FIG. 13 is a diagram showing a first configuration example of the present invention.
- FIG. 14 is a diagram showing a second configuration example of the present invention.
- FIG. 15 is a diagram showing a third configuration example of the present invention.
- According to a first aspect of the present invention, there is provided an image processing method comprising: a first step of performing image feature analysis of an original image to obtain an image feature amount independent of image coordinates; a second step of densifying the information amount of the image feature amount obtained in the first step to obtain a densified image feature amount; and a third step of generating, based on the densified image feature amount, a densified image in which the information amount of the original image is increased.
- According to a second aspect, there is provided the image processing method according to the first aspect, wherein the second step includes selecting the image feature category to which the image feature amount belongs from a plurality of image feature categories prepared in advance, and reading out from a densified image feature database, as the densified image feature amount, a basic image feature amount with an increased information amount in the selected image feature category.
- According to a third aspect, there is provided the image processing method according to the first aspect, wherein the second step includes calculating a similarity between the image feature amount and each of a plurality of image feature categories prepared in advance, and generating the densified image feature amount by weighting and adding the basic image feature amounts of the respective image feature categories according to the calculated similarities.
- According to a fourth aspect, there is provided the image processing method according to the first aspect, wherein the second step includes selecting the image feature category to which the image feature amount belongs from among a plurality of image feature categories prepared in advance and, referring to a conversion table database, converting the image feature amount into the densified image feature amount using a conversion table for feature amount conversion in the selected image feature category.
- According to a fifth aspect, there is provided the image processing method according to the fourth aspect, wherein the plurality of image feature categories are provided for each material of an object captured in an image.
- There is also provided the image processing method according to the first aspect, wherein a spatial resolution or a temporal resolution is used as the image information amount.
- a spatial frequency response or a time frequency response is obtained by using a Fourier transform.
- a spatial frequency response or a time frequency response is determined using a wavelet transform.
- a spatial frequency response or a time frequency response is obtained using a plurality of spatial filters different in at least one of scale, phase, and spatial directionality.
- According to a tenth aspect of the present invention, there is provided an image processing apparatus comprising: an image feature analysis unit that performs image feature analysis of an original image to obtain an image feature amount independent of image coordinates; an information amount densification unit that densifies the information amount of the image feature amount obtained by the image feature analysis unit to obtain a densified image feature amount; and an image generation unit that generates, based on the densified image feature amount, a densified image in which the information amount of the original image is increased.
- According to an eleventh aspect of the present invention, there is provided an image processing program causing a computer to execute: a first step of performing image feature analysis of an original image to obtain an image feature amount independent of image coordinates; a second step of densifying the information amount of the obtained image feature amount to obtain a densified image feature amount; and a third step of generating, based on the densified image feature amount obtained in the second step, a densified image in which the information amount of the original image is increased.
- FIG. 1 is a block diagram showing the configuration of the image processing device according to the first embodiment of the present invention.
- the image processing apparatus shown in FIG. 1 generates an information-densified image that exceeds the information amount of an original image.
- "Densification of the information amount" means a process of increasing the amount of image information of a given image; it is in some cases referred to as "information super-resolution".
- The image information amount is, for example, the number of pixels, the number of gradations, or the number of color channels. Taking the number of pixels as an example, when an image of 320 pixels x 240 lines is enlarged four times in both the horizontal and vertical directions, the total number of pixels becomes 16 times, and an image of 1280 pixels x 960 lines is generated.
- Likewise, a process of expanding an input image in which each pixel has 128 gradations to 256 gradations corresponds to a twofold densification of the information amount.
- Converting a monochrome image (one color channel) to an RGB image is equivalent to a threefold densification of the information amount.
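The arithmetic behind these three densification examples can be spelled out directly; the numbers are the ones given in the text, while the variable names are ours:

```python
# Pixel-count densification: 320 x 240 enlarged 4x in each direction.
pixel_factor = (1280 * 960) / (320 * 240)   # 16.0

# Gradation densification: 128 levels expanded to 256 levels.
gradation_factor = 256 / 128                # 2.0

# Color-channel densification: monochrome (1 channel) to RGB (3 channels).
channel_factor = 3 / 1                      # 3.0

print(pixel_factor, gradation_factor, channel_factor)
```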
- The image feature analysis unit 10 analyzes the image features of the input image IIN as the original image and outputs the image feature amount FI (first step).
- The image features are, for example, density distribution, frequency response, and contrast; their feature amounts are represented by a density histogram, a frequency spectrum, and the ratio between highlight and dark portions, respectively.
- The information amount densification unit 20 performs information amount densification on the image feature amount FI and outputs a densified image feature amount SFI (second step). Since the information amount of the image feature amount FI is increased directly, the information amount of the image can be increased without changing the image features themselves.
- The image generation unit 30 visualizes the densified image feature amount SFI and generates an output image IOUT as the densified image (third step).
- FIG. 2 shows an example of the configuration of the information amount densification section 20 of FIG.
- The image feature category selection unit 21 selects, from image feature categories classified in advance, the category to which the image feature amount FI belongs, and outputs an image feature index ID indicating the type of the selected image feature category.
- The image feature index ID is given to the densified image feature database 22, and the densified image feature amount corresponding to the image feature index ID is output from the densified image feature database 22 as the densified image feature amount SFI.
- The image feature amount is expressed as a vector (for example, with the frequencies of the density histogram as vector elements), and similar vectors are grouped together using an arbitrary clustering method (for example, the K-means method) to form the categories. Category selection by vector quantization is also effective.
- The densified image feature database 22 is created before the information densification processing is performed: high-density sample images whose image information amount exceeds that of the input image are prepared in advance, and image feature amounts such as density histograms are obtained from them.
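A minimal sketch of how such categories might be formed by clustering feature vectors, assuming the K-means method mentioned above; the tiny K-means routine and toy histograms below are illustrative, not the patent's implementation:

```python
import numpy as np

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal K-means for grouping similar feature vectors
    (e.g. density histograms) into image feature categories."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest center.
        d = np.linalg.norm(vectors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vectors[labels == j].mean(axis=0)
    return labels, centers

# Toy density histograms: two obvious clusters (= two feature categories).
hists = np.array([[1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.1, 0.9]])
labels, centers = kmeans(hists, k=2)
print(labels)
```

Each cluster center then plays the role of one image feature category in the database, keyed by its index ID.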
- FIG. 3 conceptually shows spatial domain super-resolution processing using the spatial resolution as the image information amount. It is assumed that the density distribution X is obtained by observing the density distribution of the low-resolution image X along the line L. For convenience of explanation, the number of pixels on the line L is assumed to be eight.
- The density distributions here are schematic illustrations of the concept and do not accurately reflect the illustrated image data; the same applies to the following description.
- After 4x super-resolution, one line contains 32 pixels, so 32 density levels must be supplemented in some form.
- In the density distribution A, for example, the densities of the low-resolution image X are arranged at equal intervals, every four pixels, and the pixels between them are supplemented by linear interpolation.
- The increase/decrease pattern of the density change along the line L is preserved, but the gradient becomes smooth, so a blurred image like the image A results.
- The amount of image information has quadrupled, but the impression of the texture, which is an image feature, has changed.
- The density distribution B is obtained by creating high-frequency components without regard to the waveform shape of the density distribution X, which consists of low-frequency components. Since the density level changes more largely and rapidly than in the density distribution A, a fine texture like that of the image B is generated. However, the waveform impression is so far removed from the original that the texture impression has changed.
- The density distribution C keeps the density distribution A as its low-frequency component and superimposes on it a high-frequency component of higher spatial frequency than the density distribution A.
- The low-frequency component traces the basic pattern of the texture, while the high-frequency component adds a fine texture pattern, so that detail at the new resolution level can be supplemented.
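The contrast between distribution A (plain linear interpolation) and distribution C (the same low-frequency base with a superimposed high-frequency component) can be sketched numerically. The 8-pixel line and the sinusoidal detail pattern are invented for illustration; in the patent the high-frequency detail comes from a texture database, not a fixed sinusoid:

```python
import numpy as np

# Low-resolution line of 8 pixels (invented values).
x8 = np.array([30.0, 80, 50, 60, 20, 70, 40, 90])

# Distribution A: 4x linear interpolation -> 32 pixels, smooth gradients.
base = np.interp(np.arange(32) / 4, np.arange(8), x8)

# Distribution C: superimpose a small high-frequency detail pattern on A.
detail = 5.0 * np.sin(np.arange(32) * np.pi / 2)
c = base + detail

# A is smooth; C varies pixel-to-pixel while keeping A as its coarse trend.
print(np.abs(np.diff(base)).mean(), np.abs(np.diff(c)).mean())
```

The mean absolute pixel-to-pixel change is larger for C than for A, which is exactly the "fine texture on top of the basic pattern" effect the text describes.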
- FIG. 4 is a block diagram showing a schematic configuration of an image processing apparatus that performs a spatial domain super-resolution process using a spatial resolution as an image information amount.
- The texture, which is the image feature of the input image IIN, is analyzed in pixel units by the texture analysis unit 40 serving as the image feature analysis unit, and is described as a texture feature vector FVT.
- The super-resolution processing unit 50, serving as the information amount densification unit, performs super-resolution processing in the texture feature space, converting the texture feature vector FVT, as the image feature amount, into a super-resolution texture feature vector SFVT, as the densified image feature amount.
- The image generation unit 60 visualizes the super-resolution texture feature vector SFVT and outputs the output image IOUT.
- FIG. 5 is a diagram showing an example of the configuration and processing of the texture analysis unit 40 of FIG.
- The texture analysis unit 40 performs texture analysis using spatial frequency responses.
- The input image IIN is decomposed by the spatial frequency component decomposition unit 41 into a plurality of channels, one per spatial frequency band, to obtain the spatial frequency responses FRS.
- The texture feature vector generation unit 42 generates a texture feature vector FVT with the spatial frequency responses FRS as its elements.
- The texture feature vector FVT has a direction and a magnitude in the texture feature space whose axes are the response channels of the spatial frequency bands, and the texture is described by these attributes. If the elements of the feature vector are mutually independent, the feature representation is efficient and free of duplication; therefore, the Fourier transform, the wavelet transform, and the like are effective for the spatial frequency component decomposition.
- Alternatively, the spatial frequency responses may be obtained using a plurality of spatial filters differing in at least one of scale, phase, and spatial directionality.
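As one sketch of the multi-filter alternative, a crude multi-scale band-pass bank built from box filters can stand in for a wavelet or Gabor decomposition; the scales, test signals, and the use of mean absolute response per band are illustrative assumptions:

```python
import numpy as np

def bandpass_bank(signal, scales=(1, 2, 4)):
    """Describe a signal's texture as a feature vector of band responses,
    one channel per scale (a crude stand-in for a wavelet/Gabor bank)."""
    features = []
    for s in scales:
        # Difference of two box averages ~ band-pass filter at scale s.
        fine = np.convolve(signal, np.ones(s) / s, mode="same")
        coarse = np.convolve(signal, np.ones(2 * s) / (2 * s), mode="same")
        features.append(np.abs(fine - coarse).mean())
    return np.array(features)

rng = np.random.default_rng(1)
fine_tex = rng.normal(size=256)         # busy, high-frequency texture
smooth_tex = np.cumsum(fine_tex) / 16   # slowly varying texture
print(bandpass_bank(fine_tex), bandpass_bank(smooth_tex))
```

A busy texture responds strongly in the small-scale channels, a smooth one does not, so the resulting vectors point in different directions of the texture feature space, as the text describes.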
- FIG. 6 is a diagram showing an example of the configuration and processing of the super-resolution processing unit 50 of FIG. 4. In FIG. 6, the super-resolution texture feature database 51 stores, for each of a plurality of image feature categories, a texture feature vector generated from a sample image exceeding the resolution of the input image IIN. Each texture feature vector carries an index 1 to M specifying its image feature category.
- The texture category selection unit 52 compares the texture feature vector FVT describing the texture of the input image IIN with each texture feature vector stored in the super-resolution texture feature database 51.
- Specifically, the texture category selection unit 52 computes, for each stored vector, the inner product with the texture feature vector FVT over the low-resolution components in which the FVT has a response, and uses this as the similarity.
- The index with the largest inner product (highest similarity) is selected as the texture feature index IDT, and the texture feature vector to which the texture feature index IDT is assigned is output as the super-resolution texture feature vector SFVT. Since the super-resolution texture feature vector SFVT has responses even in frequency bands exceeding the frequency w, super-resolution processing has been achieved in the texture feature space.
- FIG. 6 shows the response amount in a dynamic range of 0 to 100.
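The selection step of FIG. 6 (inner products over the low-frequency bands in which the input has a response, then choosing the most similar database entry) might be sketched as follows; the three-entry database and its values are invented for illustration:

```python
import numpy as np

# Hypothetical super-resolution texture feature database: each row is a
# texture feature vector whose channels are spatial frequency bands,
# low frequencies first. The input has no response above its 3rd band
# (the frequency w in the text).
database = np.array([
    [0.9, 0.5, 0.1, 0.0, 0.6, 0.3],   # category 1
    [0.2, 0.8, 0.7, 0.1, 0.4, 0.5],   # category 2
    [0.1, 0.1, 0.9, 0.8, 0.2, 0.7],   # category 3
])
fvt = np.array([0.25, 0.75, 0.65])    # input's low-frequency responses

low = database[:, :len(fvt)]          # shared low-resolution components
similarity = low @ fvt                # inner product per database entry
best = int(np.argmax(similarity))     # texture feature index IDT
sfvt = database[best]                 # super-resolution texture feature
print(best, similarity)
```

The selected row still contains responses in the bands above w, which is precisely where the super-resolution information comes from.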
- FIG. 7 is a diagram showing an example of the configuration and processing of the image generation unit 60 in FIG.
- The processing here is the inverse of the spatial frequency decomposition shown in FIG. 5. That is, each element of the super-resolution texture feature vector SFVT is multiplied by the basis function of its spatial frequency band, and the sum over all channels is taken as the output image IOUT.
- In other words, the image generation unit 60 performs the inverse transformation.
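A minimal sketch of this analysis/synthesis pair, using a DCT basis as one possible choice of basis functions (the patent permits Fourier or wavelet bases): multiplying each channel's response by its basis function and summing over all channels reproduces the signal:

```python
import numpy as np

# Orthonormal basis functions, one per frequency channel (a DCT-II basis
# built by hand; any orthonormal frequency basis works the same way).
n = 8
k = np.arange(n)
basis = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
basis /= np.linalg.norm(basis, axis=0)   # normalize each column

signal = np.array([3.0, 1, 4, 1, 5, 9, 2, 6])
coeffs = basis.T @ signal    # analysis: per-channel frequency responses
recon = basis @ coeffs       # synthesis: sum of coefficient * basis fn
print(np.allclose(recon, signal))
```

In the patent's pipeline the coefficients fed to the synthesis step are those of the super-resolution vector SFVT rather than the original analysis output, so the reconstruction carries the added high-frequency detail.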
- As described above, according to the present embodiment, the texture of the input image is described as a spatial frequency spectrum and compared with spatial frequency spectra generated from super-resolution sample images exceeding the resolution of the input image, and a texture is selected accordingly. For this reason, the effect that the texture of the image after super-resolution processing matches the impression of the input image can be reliably obtained.
- Although spatial domain super-resolution processing using the spatial resolution as the image information amount has been described here, time domain super-resolution processing using the time resolution can be performed in the same manner.
- In the time domain, texture arises from differences in the video signal level over time. Therefore, the texture analysis unit 40 in FIG. 4 is configured in the time domain and performs time-frequency decomposition.
- The processing after extension to the time domain is otherwise the same as that described with reference to FIGS. 4 to 7, and the description is omitted here.
- FIG. 8 is a block diagram showing a configuration of an image processing device according to the second embodiment of the present invention.
- The image processing device shown in FIG. 8 also generates an information-densified image exceeding the information amount of the input image, similarly to the image processing device of FIG. 1.
- The same components as those in FIG. 1 are denoted by the same reference numerals, and detailed description thereof is omitted here.
- The information amount densification unit 20A includes a densified image feature database 25, a basic image feature weighting coefficient calculation unit 26, and an image feature interpolation unit 27.
- The densified image feature database 25 stores, for each of a plurality of image feature categories, a basic image feature amount with an increased information amount, generated from a high-density sample image whose information amount exceeds that of the input image IIN.
- The basic image feature weighting coefficient calculation unit 26 calculates the similarity between the image feature amount FI obtained from the input image and each basic image feature amount stored in the densified image feature database 25, and obtains a basic image feature weighting coefficient group GWC based on the similarities.
- the basic image feature weighting coefficient group GWC is provided to the image feature interpolator 27.
- the densified image feature amount database 25 supplies the stored basic image feature amount group GSFI to the image feature amount interpolation unit 27.
- The image feature interpolation unit 27 linearly weights and adds the basic image feature group GSFI using the basic image feature weighting coefficient group GWC, and outputs the result as the densified image feature amount SFI.
- In this way, the densification of the information amount is executed by linear interpolation in the image feature space. For this reason, the image features of the input image IIN are preserved in the information-densified image IOUT.
- Furthermore, since a plurality of basic image features are interpolated using the basic image feature weighting coefficients, the densified image feature can be generated more precisely.
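The weighting-and-interpolation scheme of this embodiment might be sketched as follows; the two basic feature vectors, the inner-product similarity, and the sum-to-one normalization are illustrative choices, not the patent's exact formulation:

```python
import numpy as np

def densify(feature, basic_features):
    """Interpolate high-density basic feature vectors, weighting each by
    its similarity to the input feature over the shared low bands."""
    low = basic_features[:, :len(feature)]
    sims = low @ feature                # inner-product similarity (GWC)
    weights = sims / sims.sum()         # normalize weights to sum to 1
    return weights @ basic_features     # linear weighted sum -> SFI

# Hypothetical basic image feature group GSFI: two categories, where the
# last two channels hold the added high-density information.
basics = np.array([[1.0, 0.0, 0.8, 0.2],
                   [0.0, 1.0, 0.1, 0.9]])
fi = np.array([0.75, 0.25])             # input image feature amount FI
sfi = densify(fi, basics)
print(sfi)
```

Because the output is a convex combination of database entries, inputs that fall between categories receive blended high-density features instead of a hard category switch.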
- FIG. 9 is a block diagram showing a schematic configuration of an image processing device that performs a spatial domain super-resolution process using a spatial resolution as an image information amount.
- the same reference numerals as in FIG. 4 denote the same components as in FIG. 4, and a detailed description thereof will be omitted.
- the super-resolution processing unit 50A as the information amount densification unit includes a super-resolution texture feature amount database 55, a basic texture feature amount weight coefficient calculation unit 56, and a texture feature amount interpolation unit 57.
- The super-resolution texture feature database 55 stores, for each of a plurality of image feature categories, super-resolution basic texture feature vectors as basic image feature amounts, generated from super-resolution sample images whose resolution exceeds that of the input image IIN.
- The basic texture feature weighting coefficient calculation unit 56 calculates the similarity between the texture feature vector FVT obtained from the input image and each basic texture feature vector stored in the super-resolution texture feature database 55, and calculates a basic texture feature weighting coefficient group GWCT based on the similarities.
- the basic texture feature weight coefficient group GWCT is given to the texture feature interpolator 57.
- the super-resolution texture feature database 55 supplies the stored basic texture feature vector group GSFVT to the texture feature interpolator 57.
- the texture feature interpolator 57 linearly weights and adds the basic texture feature vector group GSFVT using the basic texture feature weighting coefficient group GWCT, and outputs the result as a super-resolution texture feature vector SFVT.
- In this way, the super-resolution processing is executed by linear interpolation in the texture feature space. For this reason, the texture of the input image IIN is also preserved in the super-resolution image IOUT.
- Furthermore, since a plurality of basic texture features are interpolated using the basic texture feature weighting coefficients, the densified image feature can be generated more precisely.
- FIG. 10 is a diagram illustrating an example of a method for calculating the basic texture feature amount weighting coefficient.
- The basic texture feature weighting coefficient calculation unit 56 calculates the similarity between the texture feature vector FVT describing the texture of the input image IIN and each vector of the basic texture feature vector group GSFVT held in the super-resolution texture feature database 55.
- The texture feature vector FVT has no significant response in the high-frequency components (no response above an arbitrarily given threshold) and shows responses from the DC component up to an intermediate frequency component (the frequency w in this example). Therefore, the basic texture feature weighting coefficient calculation unit 56 specifically computes the inner products over the low-resolution components in which the texture feature vector FVT has a response, and uses these as the similarities.
- Since the basic texture feature vector group GSFVT also has responses in the frequency bands exceeding the frequency w, super-resolution processing is achieved in the texture feature space.
- The response amounts are shown in a dynamic range of 0 to 100.
- As described above, according to the present embodiment, the texture of the input image is described as a spatial frequency spectrum, and the super-resolution texture feature vector is calculated by linearly interpolating the basic texture feature vectors, generated from super-resolution sample images exceeding the resolution of the input image, using the weighting coefficients obtained from the similarities. For this reason, the effect that the texture of the image after super-resolution processing matches that of the input image can be reliably obtained.
- Although spatial domain super-resolution processing using the spatial resolution as the image information amount has been described here, time domain super-resolution processing using the time resolution can be performed in the same manner.
- In the time domain, texture arises from differences in the video signal level over time. Therefore, the texture analysis unit 40 in FIG. 9 is configured in the time domain and performs time-frequency decomposition. The processing after extension to the time domain is otherwise the same as that described with reference to FIG. 9, and the description is omitted here.
- FIG. 11 is a diagram showing a configuration of an image processing device according to the third embodiment of the present invention.
- The image processing apparatus shown in FIG. 11 generates a super-resolution image IOUT exceeding the resolution of the input image IIN; the same components as in FIG. 4 are denoted by the same reference numerals, and their description is omitted.
- The texture feature database 71 and the texture feature vector conversion table 72 constitute the information amount densification unit.
- The texture, which is the image feature of the input image IIN, is analyzed by the texture analysis unit 40 in pixel units and described as a texture feature vector FVT.
- The internal operation of the texture analysis unit 40 is the same as that shown in FIG. 5; it generates the texture feature vector FVT from the spatial frequency responses FRS.
- the texture feature database 71 is created in advance from (i Xj) sample images, i types of resolutions and j types of materials.
- the (iXj) sample images are converted into a texture feature vector by the texture analysis unit 40, and the histogram is registered in the texture feature database 71 for each sample image. That is, the texture feature vector is determined by the texture analysis unit 40 for each pixel of the sample image, and the frequency of the texture feature vector is determined for all pixels.
- a plurality of image feature categories M_1 to M_j are defined, one for each type of material appearing in the image.
- a condition for super-resolution is that at least one of the i sample images with different resolutions exceeds the resolution of the input image IIN.
- the material is, for example, wood grain, paper, stone, or sand, and may be defined by physical properties or by human perception. Even the same wood grain can be described as coarse, smooth, bright, and so on; expressions for material types vary widely. The present invention accepts any such definition and places no limitation on these expressions.
- similar vectors are merged using a clustering method (for example, the K-means method) to form a histogram. This reduces the amount of data without altering the texture features.
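The text names the K-means method as one clustering option. A self-contained sketch of the standard Lloyd iteration (vector dimensions, counts, and iteration budget are illustrative assumptions):

```python
import numpy as np

def kmeans(vectors, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: merge similar texture feature vectors into
    k representative vectors, shrinking the database while preserving the
    overall texture statistics."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest center
        d = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # move each center to the mean of its assigned vectors
        for j in range(k):
            if (assign == j).any():
                centers[j] = vectors[assign == j].mean(axis=0)
    return centers, assign
```

The k cluster centers act as the reduced set of representative feature vectors over which the histogram is taken.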
- the texture feature database 71 prepared in this way is compared with the feature histogram obtained from the texture feature vectors FVT of all pixels of the input image IIN (the method of comparing histogram similarity is arbitrary).
- suppose the feature histogram H1 of material M_2 and resolution R_2 is selected as having the highest similarity to the feature histogram of the input image IIN.
- to perform super-resolution without changing the texture impression, which is the image feature, a feature histogram of the same material (in this case, material M_2) that exceeds the resolution of the input image IIN is selected from the image feature category. Here, the feature histogram H2 of resolution R_i is selected.
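The text leaves the histogram similarity measure open; histogram intersection is one common choice, used here purely as an illustration (the database keys and values are hypothetical):

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms: 1.0 means identical."""
    return float(np.minimum(h1, h2).sum())

def select_entry(input_hist, database):
    """database maps (material, resolution) -> feature histogram;
    return the key of the most similar entry."""
    return max(database, key=lambda k: histogram_intersection(input_hist, database[k]))

# hypothetical two-bin histograms for two database entries
db = {("M_2", "R_2"): np.array([0.5, 0.5]),
      ("M_1", "R_1"): np.array([1.0, 0.0])}
best = select_entry(np.array([0.6, 0.4]), db)  # -> ("M_2", "R_2")
```

Having identified the material this way, the higher-resolution histogram of the same material is then chosen as the super-resolution target.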
- since histograms discard spatial information, this method is applicable even when the spatial structure of the texture at execution time does not match that at learning time, that is, when the texture feature database 71 was created.
- next, the texture feature vector conversion table 72 is used to super-resolve the texture features of the input image IIN.
- the texture feature vector conversion table 72 is provided as a pair with the texture feature database 71, and stores (i × j) conversion tables covering i kinds of resolutions and j kinds of materials.
- since the feature histogram H1 of "material M_2, resolution R_2" was selected as most similar to the texture feature vectors FVT of the input image IIN, and the target is "material M_2, resolution R_i", the "(M_2, R_2) → (M_2, R_i)" conversion table TB is referenced.
- the output of the texture feature vector conversion table 72 is the super-resolution texture feature vector SFVT, which is visualized by the image generation unit 60 to yield the output image IOUT.
- FIG. 12 is a diagram showing an example of a method for creating the texture feature vector conversion table 72.
- low-resolution images are created step by step using a low-pass filter followed by subsampling.
- the resolution R_i image, which has the highest resolution, is passed through the low-pass filter 81 and reduced by subsampling 82 to obtain the resolution R_i−1 image.
- similarly, the resolution R_i−1 image is passed through the low-pass filter 83 and reduced by subsampling 84 to obtain the resolution R_i−2 image.
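A minimal sketch of one "low-pass + subsampling" step. The 2×2 block average stands in for the unspecified low-pass filters 81/83, and the subsampling factor of 2 is an assumption:

```python
import numpy as np

def downsample(img):
    """One pyramid step: average each 2x2 block (a simple low-pass filter),
    which simultaneously subsamples the image by a factor of 2."""
    H, W = img.shape
    img = img[:H - H % 2, :W - W % 2]  # crop to even dimensions
    return (img[0::2, 0::2] + img[0::2, 1::2]
          + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

r_i  = np.random.rand(64, 64)  # highest-resolution image R_i
r_i1 = downsample(r_i)         # resolution R_i-1
r_i2 = downsample(r_i1)        # resolution R_i-2
```

Repeating the step yields the stack of progressively lower-resolution images from which the label images are derived.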
- a texture analysis is performed on each image to obtain a texture feature vector.
- the type of the texture feature vector is represented by a label number.
- from the resolution R_i image, a label image A having a label number for each pixel is obtained; similarly, label image B is obtained from the resolution R_i−1 image, and label image C from the resolution R_i−2 image. Note that, for this description, the label images A, B, and C need not show every pixel, so they are shown partially and schematically.
- a texture feature vector conversion table for converting a resolution R_i−1 image into a resolution R_i image is constructed from the correspondence between the label numbers of label images B and A, for example, as follows.
- the label number "5" of label image B exists in two pixels, and corresponds in label image A to four kinds of texture feature vectors with label numbers "3", "5", "7", and "8", whose frequencies are 1, 2, 4, and 1, respectively. Therefore, the texture feature vector with label number "7", which has the maximum frequency, is set as the super-resolution texture feature vector.
- a texture feature vector conversion table can be created by such a simple selection process.
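Using the numbers from the example above, the selection process can be sketched as follows (the 2× block correspondence and the data layout are assumptions):

```python
import numpy as np
from collections import Counter

def build_conversion_table(label_lo, label_hi, factor=2):
    """For each low-resolution label, tally the labels of all co-located
    factor x factor blocks in the high-resolution label image and keep
    the most frequent one (the simple selection rule of the text)."""
    tally = {}
    H, W = label_lo.shape
    for y in range(H):
        for x in range(W):
            block = label_hi[y*factor:(y+1)*factor, x*factor:(x+1)*factor]
            tally.setdefault(int(label_lo[y, x]), Counter()).update(block.ravel().tolist())
    return {lo: counts.most_common(1)[0][0] for lo, counts in tally.items()}

# label "5" occupies two pixels of B; the co-located pixels of A carry
# labels 3, 5, 7, 8 with frequencies 1, 2, 4, 1 (as in the text)
B = np.array([[5, 5]])
A = np.array([[3, 5, 7, 7],
              [7, 7, 5, 8]])
table = build_conversion_table(B, A)  # table[5] == 7
```

The resulting dictionary maps each low-resolution label to the super-resolution label chosen by maximum frequency.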
- when labels correspond one-to-one, one texture feature vector is converted into one super-resolution texture feature vector. In the example of FIG. 12, one pixel of label image B corresponds to four pixels of label image A, so the same texture feature vector would be assigned to those four pixels. To further enhance the super-resolution effect, however, it is desirable to assign a super-resolution texture feature vector to each of the four pixels individually.
- one possible method is to assign super-resolution texture feature vectors to pixels according to the label frequency. That is, of the eight pixels of label image A corresponding to label "5" of label image B, one pixel is assigned label "3", two pixels label "5", four pixels label "7", and one pixel label "8".
- however, it is not always appropriate to assume that the texture pattern is spatially identical at the time the texture feature database 71 is created and at the time super-resolution of the input image IIN is executed. Therefore, when assigning super-resolution texture feature vectors to pixels according to the label frequency, it is preferable to randomize the assignment, for example by random number generation. The pixels are selected randomly, but the selected super-resolution texture feature vectors and their frequencies follow the label correspondence.
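A sketch of this randomized frequency-preserving assignment for the eight-pixel example (the helper name is a hypothetical):

```python
import numpy as np

def assign_by_frequency(labels, counts, seed=None):
    """Place each super-resolution label exactly `count` times -- here "3"
    once, "5" twice, "7" four times, "8" once over the eight pixels -- but
    randomize which pixel receives which label."""
    rng = np.random.default_rng(seed)
    out = np.repeat(labels, counts)  # frequencies follow the label correspondence
    rng.shuffle(out)                 # pixel placement is random
    return out

pixels = assign_by_frequency([3, 5, 7, 8], [1, 2, 4, 1], seed=0)
```

The multiset of assigned labels is fixed by the table; only their spatial arrangement is left to chance.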
- similarly, the texture feature vector conversion table for converting a resolution R_i−2 image into a resolution R_i image is constructed from the correspondence between the label numbers of label images C and A.
- the combination and frequency of label number "11" of label image C with the labels of label image A are as shown in label correspondence example 2. Since the two most frequent labels are "7" and "9", for example, the average of the two texture feature vectors of labels "7" and "9" may be taken as the super-resolution texture feature vector.
- alternatively, the method described above for creating the conversion table that converts a resolution R_i−1 image into a resolution R_i image may be used.
- each unit of the image processing apparatus according to the present invention, or each step of the image processing method according to the present invention, may be realized with dedicated hardware, or in software as a program executed on a computer.
- FIG. 13 is a diagram showing a first configuration example, and is an example of a configuration for performing image processing according to the present invention using a personal computer.
- since the resolution of the camera 101 is lower than that of the display 102, a super-resolution image is created by the image processing program according to the present invention loaded in the main memory 103, in order to make full use of the display capability of the display 102.
- the low-resolution image captured by the camera 101 is recorded in the image memory 104.
- a super-resolution texture feature database 105a is prepared in advance so that it can be referenced by the image processing program in the main memory 103.
- the image processing program in the main memory 103 reads the low-resolution image in the image memory 104 via the memory bus 106, converts the low-resolution image into a high-resolution image in accordance with the resolution of the display 102, and then returns to the video memory via the memory bus 106. Transfer to 107.
- the high-resolution image transferred to the video memory 107 can be viewed on the display 102.
- the operation of the image processing program, the contents of the database, and the method of creating it are the same as those described in the above embodiments, and the description is omitted here.
- the present invention is not limited to the configuration of FIG. 13 and can take various configurations. For example, the super-resolution texture feature database 105a may reside in an external storage device connected to another personal computer and be acquired via the network 108. Likewise, the low-resolution image may be acquired via the network 108.
- FIG. 14 is a diagram showing a second configuration example, and is an example of a configuration for performing image processing according to the present invention using a server client system.
- the resolution of the camera 111 is lower than the resolution of the display 112.
- super-resolution processing is executed in a server-client system.
- the server 113 includes a texture analysis unit 114 and a super-resolution processing unit 115; it calculates the texture feature FT of the input image IIN, super-resolves it, and transmits the result to the client 117 via the network 116 as the super-resolution texture feature SFT.
- the client 117 visualizes the received super-resolution texture feature amount SFT by the image generation circuit 118 and displays the obtained super-resolution image on the display 112.
- the contents of the texture analysis, the super-resolution processing, the super-resolution texture feature database, and its creation method are any of those described in the above embodiments, and the description is omitted here.
- note that the camera 111 need not be part of the client 117.
- FIG. 15 is a diagram showing a third configuration example, and is an example of a configuration for performing image processing according to the present invention using a camera-equipped mobile phone and a television.
- the camera-equipped mobile phone 121 can send image data to the television 124 via the network 122 or the memory card 123.
- since the resolution of the camera of the camera-equipped mobile phone 121 is lower than that of the TV 124, a super-resolution image is created by the texture feature analysis circuit, the super-resolution texture feature database, and the image generation circuit implemented in the TV's internal circuitry, and is displayed on the screen.
- the details of the texture feature analysis, the super-resolution texture feature database, and the image generation are any of those described in the above embodiments, and their description is omitted here.
- the camera-equipped mobile phone 121 may be replaced by a digital still camera or a video movie camera.
- as described above, the present invention can be implemented on widespread equipment such as personal computers, server-client systems, camera-equipped mobile phones, digital still cameras, video movie cameras, and televisions, and requires no special equipment, operation, or management. It also places no restrictions on the system construction method, the device connection form, or the internal configuration of devices, such as implementation on dedicated hardware or a combination of software and hardware.
- the present invention produces an image with a larger amount of information without degrading image features, and can therefore be used in various application fields in which visual information is regarded as important.
- for example, in digital archives the details of exhibits can be presented accurately to viewers; in video production the possibilities of image expression are enhanced; and in broadcasting, compatibility among various image formats can be ensured.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005516842A JP4069136B2 (ja) | 2004-01-09 | 2004-12-24 | 画像処理方法、画像処理装置、サーバクライアントシステム、サーバ装置、クライアント装置および画像処理システム |
US11/063,389 US7203381B2 (en) | 2004-01-09 | 2005-02-23 | Image processing method, image processing apparatus, and image processing program |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-003853 | 2004-01-09 | ||
JP2004003853 | 2004-01-09 | ||
JP2004100727 | 2004-03-30 | ||
JP2004-100727 | 2004-03-30 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/063,389 Continuation US7203381B2 (en) | 2004-01-09 | 2005-02-23 | Image processing method, image processing apparatus, and image processing program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005067294A1 true WO2005067294A1 (ja) | 2005-07-21 |
Family
ID=34752096
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/019374 WO2005067294A1 (ja) | 2004-01-09 | 2004-12-24 | 画像処理方法、画像処理装置および画像処理プログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US7203381B2 (ja) |
JP (1) | JP4069136B2 (ja) |
CN (1) | CN100493174C (ja) |
WO (1) | WO2005067294A1 (ja) |
Families Citing this family (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101685535B (zh) * | 2004-06-09 | 2011-09-28 | 松下电器产业株式会社 | 图象处理方法 |
CN100521743C (zh) * | 2004-11-30 | 2009-07-29 | 松下电器产业株式会社 | 图像处理方法、图像处理装置 |
US8606383B2 (en) * | 2005-01-31 | 2013-12-10 | The Invention Science Fund I, Llc | Audio sharing |
US9910341B2 (en) | 2005-01-31 | 2018-03-06 | The Invention Science Fund I, Llc | Shared image device designation |
US9489717B2 (en) | 2005-01-31 | 2016-11-08 | Invention Science Fund I, Llc | Shared image device |
US20060174203A1 (en) | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Viewfinder for shared image device |
US8902320B2 (en) | 2005-01-31 | 2014-12-02 | The Invention Science Fund I, Llc | Shared image device synchronization or designation |
US7920169B2 (en) | 2005-01-31 | 2011-04-05 | Invention Science Fund I, Llc | Proximity of shared image devices |
US20060170956A1 (en) | 2005-01-31 | 2006-08-03 | Jung Edward K | Shared image devices |
US9124729B2 (en) | 2005-01-31 | 2015-09-01 | The Invention Science Fund I, Llc | Shared image device synchronization or designation |
US9082456B2 (en) | 2005-01-31 | 2015-07-14 | The Invention Science Fund I Llc | Shared image device designation |
US7876357B2 (en) | 2005-01-31 | 2011-01-25 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US10003762B2 (en) | 2005-04-26 | 2018-06-19 | Invention Science Fund I, Llc | Shared image devices |
US9819490B2 (en) | 2005-05-04 | 2017-11-14 | Invention Science Fund I, Llc | Regional proximity for shared image device(s) |
US9001215B2 (en) | 2005-06-02 | 2015-04-07 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US8081682B1 (en) * | 2005-10-13 | 2011-12-20 | Maxim Integrated Products, Inc. | Video encoding mode decisions according to content categories |
US8126283B1 (en) | 2005-10-13 | 2012-02-28 | Maxim Integrated Products, Inc. | Video encoding statistics extraction using non-exclusive content categories |
US8149909B1 (en) | 2005-10-13 | 2012-04-03 | Maxim Integrated Products, Inc. | Video encoding control using non-exclusive content categories |
KR100819027B1 (ko) * | 2006-04-26 | 2008-04-02 | 한국전자통신연구원 | 얼굴 영상을 이용한 사용자 인증 방법 및 장치 |
US8139899B2 (en) * | 2007-10-24 | 2012-03-20 | Motorola Mobility, Inc. | Increasing resolution of video images |
JP2010074732A (ja) * | 2008-09-22 | 2010-04-02 | Canon Inc | 画像処理装置及び画像処理方法ならびに画像処理方法を実行するプログラム |
US20110211737A1 (en) * | 2010-03-01 | 2011-09-01 | Microsoft Corporation | Event Matching in Social Networks |
US9465993B2 (en) | 2010-03-01 | 2016-10-11 | Microsoft Technology Licensing, Llc | Ranking clusters based on facial image analysis |
US9536288B2 (en) * | 2013-03-15 | 2017-01-03 | Samsung Electronics Co., Ltd. | Creating details in an image with adaptive frequency lifting |
US9305332B2 (en) | 2013-03-15 | 2016-04-05 | Samsung Electronics Company, Ltd. | Creating details in an image with frequency lifting |
US9349188B2 (en) | 2013-03-15 | 2016-05-24 | Samsung Electronics Co., Ltd. | Creating details in an image with adaptive frequency strength controlled transform |
US9436890B2 (en) * | 2014-01-23 | 2016-09-06 | Samsung Electronics Co., Ltd. | Method of generating feature vector, generating histogram, and learning classifier for recognition of behavior |
KR102214922B1 (ko) * | 2014-01-23 | 2021-02-15 | 삼성전자주식회사 | 행동 인식을 위한 특징 벡터 생성 방법, 히스토그램 생성 방법, 및 분류기 학습 방법 |
US9471853B2 (en) * | 2014-05-19 | 2016-10-18 | Jinling Institute Of Technology | Method and apparatus for image processing |
US9652829B2 (en) | 2015-01-22 | 2017-05-16 | Samsung Electronics Co., Ltd. | Video super-resolution by fast video segmentation for boundary accuracy control |
US9824278B2 (en) * | 2015-06-24 | 2017-11-21 | Netflix, Inc. | Determining native resolutions of video sequences |
GB2553557B (en) * | 2016-09-08 | 2022-04-20 | V Nova Int Ltd | Data processing apparatuses, methods, computer programs and computer-readable media |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10112843A (ja) * | 1996-10-04 | 1998-04-28 | Sony Corp | 画像処理装置および画像処理方法 |
JPH114413A (ja) * | 1997-06-12 | 1999-01-06 | Sony Corp | 画像変換装置、画像変換方法、演算装置、演算方法、および、伝送媒体 |
JP2000312294A (ja) * | 1999-02-24 | 2000-11-07 | Oki Data Corp | 画像処理装置 |
JP2003018398A (ja) * | 2001-04-20 | 2003-01-17 | Mitsubishi Electric Research Laboratories Inc | ピクセル画像から超解像度画像を生成する方法 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3511645B2 (ja) | 1993-08-30 | 2004-03-29 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
JPH08294001A (ja) | 1995-04-20 | 1996-11-05 | Seiko Epson Corp | 画像処理方法および画像処理装置 |
JP2828138B2 (ja) | 1996-08-28 | 1998-11-25 | 日本電気株式会社 | 画像合成方法及び画像合成装置 |
JP3743077B2 (ja) | 1996-10-31 | 2006-02-08 | ソニー株式会社 | 画像信号変換装置および方法 |
JP3867346B2 (ja) | 1997-06-17 | 2007-01-10 | ソニー株式会社 | 画像信号処理装置及び方法並びに予測パラメータの学習装置及び方法 |
US6760489B1 (en) * | 1998-04-06 | 2004-07-06 | Seiko Epson Corporation | Apparatus and method for image data interpolation and medium on which image data interpolation program is recorded |
JP2002262094A (ja) * | 2001-02-27 | 2002-09-13 | Konica Corp | 画像処理方法及び画像処理装置 |
US7085436B2 (en) * | 2001-08-28 | 2006-08-01 | Visioprime | Image enhancement and data loss recovery using wavelet transforms |
-
2004
- 2004-12-24 JP JP2005516842A patent/JP4069136B2/ja not_active Expired - Fee Related
- 2004-12-24 WO PCT/JP2004/019374 patent/WO2005067294A1/ja active Application Filing
- 2004-12-24 CN CNB2004800039190A patent/CN100493174C/zh not_active Expired - Fee Related
-
2005
- 2005-02-23 US US11/063,389 patent/US7203381B2/en active Active
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8156128B2 (en) | 2005-09-14 | 2012-04-10 | Jumptap, Inc. | Contextual mobile content placement on a mobile communication facility |
US7965339B2 (en) | 2006-04-11 | 2011-06-21 | Kabushiki Kaisha Toshiba | Resolution enhancing method and apparatus of video |
JP2007280284A (ja) * | 2006-04-11 | 2007-10-25 | Toshiba Corp | 画像の高解像度化方法及び装置 |
US8131116B2 (en) | 2006-08-31 | 2012-03-06 | Panasonic Corporation | Image processing device, image processing method and image processing program |
JP2008252701A (ja) * | 2007-03-30 | 2008-10-16 | Toshiba Corp | 映像信号処理装置、映像表示装置および映像信号処理方法 |
JP2014082773A (ja) * | 2008-04-22 | 2014-05-08 | Sony Corp | 画像処理作業をオフロードする携帯型画像化装置 |
KR101498206B1 (ko) * | 2008-09-30 | 2015-03-06 | 삼성전자주식회사 | 고해상도 영상 획득 장치 및 그 방법 |
JP2012506647A (ja) * | 2008-09-30 | 2012-03-15 | サムスン エレクトロニクス カンパニー リミテッド | 高解像度映像獲得装置およびその方法 |
US10578920B2 (en) | 2008-12-19 | 2020-03-03 | Semiconductor Energy Laboratory Co., Ltd. | Method for driving liquid crystal display device |
US11899311B2 (en) | 2008-12-19 | 2024-02-13 | Semiconductor Energy Laboratory Co., Ltd. | Method for driving liquid crystal display device |
JP2017207765A (ja) * | 2008-12-19 | 2017-11-24 | 株式会社半導体エネルギー研究所 | 液晶表示装置の駆動方法 |
US10018872B2 (en) | 2008-12-19 | 2018-07-10 | Semiconductor Energy Laboratory Co., Ltd. | Method for driving liquid crystal display device |
US11300832B2 (en) | 2008-12-19 | 2022-04-12 | Semiconductor Energy Laboratory Co., Ltd. | Method for driving liquid crystal display device |
US10254586B2 (en) | 2008-12-19 | 2019-04-09 | Semiconductor Energy Laboratory Co., Ltd. | Method for driving liquid crystal display device |
JP2019117411A (ja) * | 2008-12-19 | 2019-07-18 | 株式会社半導体エネルギー研究所 | 表示装置の駆動方法 |
JP2015515179A (ja) * | 2012-03-05 | 2015-05-21 | トムソン ライセンシングThomson Licensing | 超解像度処理のための方法、システム及び装置 |
JP2015132930A (ja) * | 2014-01-10 | 2015-07-23 | 日本放送協会 | 空間超解像・階調補間装置及びプログラム |
JP2018166847A (ja) * | 2017-03-30 | 2018-11-01 | コニカミノルタ株式会社 | 画像処理装置及び放射線画像撮影システム |
Also Published As
Publication number | Publication date |
---|---|
CN100493174C (zh) | 2009-05-27 |
JPWO2005067294A1 (ja) | 2007-07-26 |
JP4069136B2 (ja) | 2008-04-02 |
CN1748416A (zh) | 2006-03-15 |
US20050152619A1 (en) | 2005-07-14 |
US7203381B2 (en) | 2007-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4069136B2 (ja) | 画像処理方法、画像処理装置、サーバクライアントシステム、サーバ装置、クライアント装置および画像処理システム | |
Battiato et al. | A locally adaptive zooming algorithm for digital images | |
US6009213A (en) | Image processing apparatus and method | |
JP2001054075A (ja) | 画像信号の動き補償走査変換回路 | |
JPH06343170A (ja) | ビデオ信号のノイズ低減装置、ならびに3次元ディスクリートコサイン変換およびノイズ測定を用いた方法 | |
US6181834B1 (en) | Hybrid image reduction method and apparatus with moir{acute over (e)} suppression | |
JP2000115716A (ja) | 映像信号の変換装置および変換方法、並びにそれを使用した画像表示装置およびテレビ受信機 | |
EP1565878A1 (en) | A unit for and method of image conversion | |
KR101098300B1 (ko) | 공간 신호 변환 | |
JP4173705B2 (ja) | 動画像合成方法および装置並びにプログラム | |
Wang et al. | A fast scheme for arbitrarily resizing of digital image in the compressed domain | |
CN107154020A (zh) | 一种基于Curvelet变换的影像融合方法及系统 | |
JP2522357B2 (ja) | 画像の拡大方式 | |
JPH07193789A (ja) | 画像情報変換装置 | |
JP3693187B2 (ja) | 信号変換装置及び信号変換方法 | |
JP4104937B2 (ja) | 動画像合成方法および装置並びにプログラム | |
JP2000092455A (ja) | 画像情報変換装置および画像情報変換方法 | |
JP2004152148A (ja) | 動画像合成方法および装置並びにプログラム | |
JP3871350B2 (ja) | 解像度補償可能な画像変換装置および方法 | |
Sicuranza et al. | Multidimensional processing of video signals | |
Nojiri et al. | Motion compensated standards converter for HDTV | |
JP4582993B2 (ja) | 動画像合成方法および装置並びにプログラム | |
JPH1098694A (ja) | 画像信号の走査変換方法及び回路 | |
JP4310847B2 (ja) | 画像情報変換装置および変換方法 | |
JP3814850B2 (ja) | 信号変換装置および方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 11063389 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005516842 Country of ref document: JP |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20048039190 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |