WO2011118224A1 - Blurred image acquisition apparatus and method - Google Patents
Blurred image acquisition apparatus and method
- Publication number
- WO2011118224A1 (PCT/JP2011/001755)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sat
- cache
- image data
- row
- value
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
Definitions
- the present invention relates to an apparatus and method for acquiring a blurred image in computer graphics.
- Japanese Patent Application Laid-Open No. 2006-72829 discloses a system that acquires a blurred image using summed-area tables (SAT).
- first, a summed-area table (SAT) is obtained from the input image.
- the value of the SAT at the coordinates (x, y) of a pixel is denoted sat(x, y).
- the pixel data of the input pixel is denoted i(x, y).
- sat(x, y) is the value given by the following Equation 1, the sum of the input pixel data over the rectangle from the origin to (x, y): sat(x, y) = Σ i(x', y') over all x' ≤ x and y' ≤ y.
- sat(-1, y), sat(x, -1), and sat(-1, -1) are set to 0 (zero).
- Table 1 below shows an example of virtual original image data and its summed-area table (SAT) composed of 4 × 4 pixels.
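- As a non-authoritative illustration (not part of the patent text), a summed-area table for a small image might be built as follows in Python; the array layout, names, and the 4 × 4 test values are assumptions:

```python
import numpy as np

def build_sat(image: np.ndarray) -> np.ndarray:
    # sat(x, y) via the recurrence i(x, y) + sat(x-1, y) + sat(x, y-1) - sat(x-1, y-1),
    # with sat(-1, y), sat(x, -1), and sat(-1, -1) treated as 0.
    height, width = image.shape
    sat = np.zeros((height, width), dtype=np.int64)
    for y in range(height):
        for x in range(width):
            left = sat[y, x - 1] if x > 0 else 0
            above = sat[y - 1, x] if y > 0 else 0
            diag = sat[y - 1, x - 1] if x > 0 and y > 0 else 0
            sat[y, x] = image[y, x] + left + above - diag
    return sat

# A 4 x 4 example in the spirit of Table 1 (values are arbitrary).
print(build_sat(np.arange(16).reshape(4, 4)))
```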
- FIG. 1 is a diagram for explaining the concept of the summed-area table.
- a point in FIG. 1 means a pixel having coordinates (x, y). Sat(x, y) for obtaining the blurred image data of the point (x, y) indicated by the black circle in FIG. 1 can be expressed as the following Equation 3.
- the point (x, y) lies at the center of the rectangle whose corners are the coordinates (Xr, Yt), (Xr, Yb), (Xl, Yt), and (Xl, Yb) in FIG. 1.
- the blurred image data is then given by Equation 4 below.
- an arbitrary averaged image can be obtained by appropriately adjusting w and h (also expressed together as a filter degree W).
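- For instance, a single blurred pixel could be computed from the SAT via Formulas II and III below; this sketch assumes odd w and h, ignores image-edge clamping, and follows the patent's convention that l > r and b > t (so Xr/Yt are the smaller, exclusive edges and Xl/Yb the larger, inclusive ones):

```python
def blurred_pixel(sat, x, y, w, h):
    # Window edges: Xr/Yt lie one pixel outside the window (exclusive),
    # Xl/Yb are the inclusive far edges, so Xl - Xr = w and Yb - Yt = h.
    xr, xl = x - w // 2 - 1, x + w // 2
    yt, yb = y - h // 2 - 1, y + h // 2
    s = lambda cx, cy: 0 if cx < 0 or cy < 0 else int(sat[cy, cx])
    # Formula II (four-corner lookup) followed by Formula III (normalization).
    Sat = s(xr, yt) - s(xr, yb) - s(xl, yt) + s(xl, yb)
    return Sat / (w * h)
```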
- FIG. 2 shows an example of a blurred image using a summed-area table (SAT).
- SAT: summed-area table
- an object of the present invention is to provide an apparatus and a method for acquiring a blurred image that can reduce a required storage capacity in computer graphics.
- the first aspect of the present invention relates to a system for obtaining blurred image data for computer graphics.
- the example of the system of the present invention can be realized by a computer equipped with a graphic processing chip and software.
- the computer has image processing hardware.
- the control unit receives instructions from the main program stored in the main memory.
- the control unit appropriately reads data stored in the storage unit, and causes the calculation unit to perform calculation processing using the input data.
- the calculation process in this calculation unit includes a blurred image process described later.
- the result of the arithmetic processing is appropriately stored in the storage unit and is output to a monitor or the like.
- each function and means of the present invention described below may be implemented by hardware such as a circuit, a chip, or a core, or may be realized by cooperation of an arithmetic processing circuit or the like and software.
- the input image data at the coordinates (X, Y) is i (X, Y).
- the blurred image data at the coordinates (x, y) is defined as IBlur(x, y). Basically, blurred image data is obtained for every pixel on the screen. As described later, the blurred image data is data obtained by blurring information needed for computer graphics, such as color (e.g., red) and transparency.
- Sat(x, y) = sat(Xr, Yt) - sat(Xr, Yb) - sat(Xl, Yt) + sat(Xl, Yb) .... Formula II
- in the y coordinate values, t means top and b means bottom.
- This system may be implemented only by hardware.
- This system may be implemented by hardware and software.
- this system has an input / output unit, a control unit, a calculation unit, and a storage unit. These elements are connected so that information can be exchanged via a bus or the like.
- a control program is stored in the storage unit.
- the control unit reads the control program stored in the storage unit, appropriately reads the information from the storage unit, and performs calculation processing in the calculation unit.
- the result of the arithmetic processing is stored in the storage unit.
- the result of the arithmetic processing is output from the input / output unit.
- This system has input image data input means for receiving input image data i (X, Y).
- This system has sat calculation means for obtaining the value of sat shown by the formula I using the input image data i (X, Y) received by the input image data input means.
- the sat calculation means obtains the values of sat(X, 0) in order from sat(0, 0) using i(X, 0). In this case, each value can be obtained by reading sat(X-1, 0) and i(X, 0) and adding them. From the second row onward, i(X, Y), sat(X-1, Y), sat(X, Y-1), and sat(X-1, Y-1) are read, and the sat value is obtained by addition and sign conversion (a process for changing + to -).
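- A row-at-a-time version of this calculation might look like the following sketch; it mirrors the add/sign-convert steps described above and is an interpretation, not the patent's circuit:

```python
def sat_row(prev_sat_row, input_row):
    # prev_sat_row: sat values of row Y-1 (None for the first row).
    # Returns the sat values of the current row per Formula I,
    # treating out-of-range sat values as 0.
    row = []
    for X, i_val in enumerate(input_row):
        left = row[X - 1] if X > 0 else 0
        above = prev_sat_row[X] if prev_sat_row is not None else 0
        diag = prev_sat_row[X - 1] if (prev_sat_row is not None and X > 0) else 0
        row.append(i_val + left + above - diag)
    return row
```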
- This system has a cache for storing a sat calculation result for each line obtained by the sat calculation means.
- the cache includes a first cache and a second cache.
- the first cache includes a front cache and a back cache.
- the second cache includes a front cache and a back cache. These caches are specified by flags, for example.
- This system has Sat calculation means for reading the values of sat(Xr, Yt), sat(Xr, Yb), sat(Xl, Yt), and sat(Xl, Yb) from the cache and obtaining the value of Sat(x, y).
- the value of Sat(x, y) is given by Formula II. Therefore, the values of sat(Xr, Yt), sat(Xr, Yb), sat(Xl, Yt), and sat(Xl, Yb) are read out, and the Sat value is obtained by addition and sign conversion (changing + to -).
- This system has a blurred image data acquisition unit that obtains IBlur(x, y) using the values of Sat(x, y), w, and h obtained by the Sat calculation means.
- IBlur(x, y) is determined by Formula III. Therefore, the value of IBlur(x, y) can be obtained by reading out Sat(x, y), w, and h and using a multiplier and a 1/x (reciprocal) circuit.
- This system can determine the value of I Blur (x, y) as follows.
- the sat calculation means obtains sat (X, 0) which is the sat value of the first row using the input image data i (X, 0) of the first row. Then, the front cache of the first cache stores sat (X, 0) that is the result of the sat calculation on the first line.
- the sat calculation means uses the input image data i (X, 1) on the second line and sat (X, 0) which is the sat calculation result on the first line stored in the front cache of the first cache. Then, sat (X, 1) which is the sat value of the second row is obtained. Then, the back cache of the first cache stores sat (X, 1) which is the sat calculation result of the second row.
- the sat calculation means uses the input image data i(X, 2) of the third row and sat(X, 1), the sat calculation result of the second row stored in the back cache of the first cache, to obtain sat(X, 2), the sat value of the third row.
- the front cache of the first cache updates its stored information using sat(X, 2), the sat calculation result of the third row.
- the sat calculation means uses the input image data i(X, m-1) of the m-th row and sat(X, m-2), the sat calculation result of the (m-1)-th row stored in one of the caches of the first cache, to obtain sat(X, m-1), the sat value of the m-th row.
- one of the caches then updates its stored information using sat(X, m-1), the sat calculation result of the m-th row. This operation is repeated until m becomes Yt - 2.
- sat(X, Yt), the sat calculation result of the (Yt-1)-th row, is stored in whichever of the front cache and the back cache of the first cache does not store sat(X, Yt-1).
- the sat calculation means uses the input image data i(X, Yt+1) of the Yt-th row and sat(X, Yt) read from the cache to obtain sat(X, Yt+1), the sat calculation result of the Yt-th row.
- the front cache of the second cache stores sat(X, Yt+1), the sat calculation result of the Yt-th row.
- the sat calculation means uses the input image data i(X, Yt+2) of the (Yt+1)-th row and sat(X, Yt+1) read from the front cache of the second cache to obtain sat(X, Yt+2), the sat calculation result of the (Yt+1)-th row.
- the back cache of the second cache stores sat(X, Yt+2), the sat calculation result of the (Yt+1)-th row.
- the sat calculation means uses the input image data i(X, Yt+3) of the (Yt+2)-th row and sat(X, Yt+2) read from the back cache of the second cache to obtain sat(X, Yt+3), the sat calculation result of the (Yt+2)-th row.
- the front cache of the second cache updates its stored information using sat(X, Yt+3), the sat calculation result of the (Yt+2)-th row.
- sat(X, Yb), the sat calculation result of the (Yb-1)-th row, is stored in whichever of the front cache and the back cache of the second cache does not store sat(X, Yb-1).
- a preferred embodiment of the present invention is the system as described above, wherein i (x, y) is any value of R, G, or B indicating color.
- a preferred embodiment of the present invention is the system described above, wherein i(x, y) is one of the values R, G, or B indicating color, or an α (alpha) value indicating transparency.
- a preferred embodiment of the present invention is the system described above, wherein the cache that stores sat(X, Yt-1), among the front cache and the back cache of the first cache, is used as a cache for processing other than obtaining blurred image data during the period from the sat calculation process for the Yt-th row until the process for obtaining IBlur(x, y).
- the second aspect of the present invention relates to a system for obtaining blurred image data for computer graphics that has a coefficient determination unit 11 and a blurred image acquisition unit 12. The blurred image acquisition unit 12 performs the same operations as the system described above.
- let the coordinates of the pixel for which the blurred image data is obtained be (x, y),
- let the input image data at the coordinates (x, y) be i(x, y),
- let the blurred image data at the coordinates (x, y) be IBlur(x, y),
- let the points whose x coordinate differs from the coordinates (x, y) by a predetermined value and whose y coordinate differs by a predetermined value be (Xr, Yt), (Xr, Yb), (Xl, Yt), and (Xl, Yb), where l (el) > r and b > t,
- a value satisfying the following formula Ia is defined as sat (X, Y)
- a value satisfying the following formula II is defined as Sat (x, y)
- the blurred image acquisition unit 12 has input image data input means 20 for receiving the input image data i(X, Y) at the coordinates (X, Y) and for receiving the blur coefficient w(X, Y) from the coefficient determination unit 11;
- sat calculation means 21 for obtaining the value of sat given by Formula Ia using the input image data i(X, Y) received by the input image data input means;
- caches 22 and 23 for storing the sat calculation results for each row obtained by the sat calculation means; Sat calculation means 24 for reading the values of sat(Xr, Yt), sat(Xr, Yb), sat(Xl, Yt), and sat(Xl, Yb) from the caches and obtaining the value of Sat(x, y); and blurred image data acquisition means 25 for obtaining IBlur(x, y) using the value of Sat(x, y) obtained by the Sat calculation means.
- the cache includes a first cache 22 and a second cache 23,
- the first cache includes a front cache 22a and a back cache 22b.
- the second cache includes a front cache 23a and a back cache 23b.
- the sat calculation means obtains sat(X, 0), the sat value of the first row, using the input image data i(X, 0) of the first row.
- the front cache of the first cache stores sat(X, 0), the sat calculation result of the first row.
- the sat calculation means obtains sat(X, 1), the sat value of the second row, using the input image data i(X, 1) of the second row and sat(X, 0), the sat calculation result of the first row stored in the front cache of the first cache.
- the back cache of the first cache stores sat(X, 1), the sat calculation result of the second row.
- the sat calculation means obtains sat(X, 2), the sat value of the third row, using the input image data i(X, 2) of the third row and sat(X, 1), the sat calculation result of the second row stored in the back cache of the first cache. The front cache of the first cache updates its stored information using sat(X, 2), the sat calculation result of the third row.
- sat(X, Yt), the sat calculation result of the (Yt-1)-th row, is stored in whichever of the front cache of the first cache and the back cache of the first cache does not store sat(X, Yt-1).
- the sat calculation means obtains sat(X, Yt+1), the sat calculation result of the Yt-th row, using the input image data i(X, Yt+1) of the Yt-th row and sat(X, Yt) read from the cache.
- the front cache of the second cache stores sat(X, Yt+1), the sat calculation result of the Yt-th row.
- the sat calculation means obtains sat(X, Yt+2), the sat calculation result of the (Yt+1)-th row, using the input image data i(X, Yt+2) of the (Yt+1)-th row and sat(X, Yt+1) read from the front cache of the second cache.
- the back cache of the second cache stores sat(X, Yt+2), the sat calculation result of the (Yt+1)-th row.
- the sat calculation means obtains sat(X, Yt+3), the sat calculation result of the (Yt+2)-th row, using the input image data i(X, Yt+3) of the (Yt+2)-th row and sat(X, Yt+2) read from the back cache of the second cache.
- the front cache of the second cache updates its stored information using sat(X, Yt+3), the sat calculation result of the (Yt+2)-th row.
- one of the front cache of the second cache and the back cache of the second cache stores sat(X, Yb-1), the sat calculation result of the (Yb-2)-th row.
- sat(X, Yb), the sat calculation result of the (Yb-1)-th row, is stored in whichever of them does not store sat(X, Yb-1).
- sat(X, Y) = w(X, Y) × i(X, Y) + sat(X-1, Y) + sat(X, Y-1) - sat(X-1, Y-1) .... Formula Ia
- in Formula Ia, when X-1 is -1, sat(X-1, Y) and sat(X-1, Y-1) are set to 0; when Y-1 is -1, sat(X, Y-1) and sat(X-1, Y-1) are set to 0.
- the coefficient determination unit 11 has input means for inputting information on the depth z(X, Y) or the depth d(X, Y) of the input image data i(X, Y); threshold input means for inputting the first threshold A and the second threshold B; comparison operation means for comparing the depth z(X, Y) or d(X, Y) with the first threshold A and the second threshold B; and coefficient determination means for determining the coefficient w(X, Y) according to the comparison result of the comparison operation means. The coefficient determination means sets w(X, Y) to 0 when the depth z(X, Y) or d(X, Y) is greater than or equal to the first threshold A, and sets w(X, Y) to 1 when the depth z(X, Y) or d(X, Y) is less than or equal to the second threshold B.
- otherwise, w(X, Y) is set to a value between 0 and 1.
- the comparison calculation means compares the depth z (X, Y) or the depth d (X, Y) with the first threshold value A and the second threshold value B.
- the coefficient determining means determines the coefficient w (X, Y) according to the comparison result of the comparison operation means.
- the coefficient determination means sets w(X, Y) to 0 when the depth z(X, Y) or the depth d(X, Y) is greater than or equal to the first threshold A.
- the coefficient determination means sets w(X, Y) to 1 when the depth z(X, Y) or the depth d(X, Y) is less than or equal to the second threshold B.
- otherwise, the coefficient determination means performs arithmetic processing so that w(X, Y) takes an intermediate value between 0 and 1. For example, (A - z)/(A - B) or (A - d)/(A - B) may be set as w(X, Y), or the value of w(X, Y) may be read from a lookup table.
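- As an illustration only: under the rules just listed, the coefficient could be computed as below, using (A - z)/(A - B) as the intermediate value; the lookup-table variant is omitted and the names are assumptions:

```python
def blur_coefficient(z, A, B):
    # w = 0 at or beyond the first threshold A, w = 1 at or below the
    # second threshold B, and a linear ramp in between.
    if z >= A:
        return 0.0
    if z <= B:
        return 1.0
    return (A - z) / (A - B)
```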
- the coefficient determination unit 11 has object input means for inputting the input image data i(X, Y) and the identification numbers of objects in the computer graphics, together with information on the object to be subjected to blurred image processing; and
- coefficient determination means that identifies the object to be subjected to blurred image processing using the information on that object, the identification numbers, and the input image data i(X, Y), and sets the blur coefficient w(X, Y) for the object to be blurred to 1. Such is the system.
- the input image data i(X, Y) and the identification numbers of the objects in the computer graphics are input to the object input means, together with information on the object that is the target of the blurred image processing.
- the coefficient determination means identifies the object to be subjected to the blurred image processing using the information on that object, the identification numbers, and the input image data i(X, Y).
- the blur coefficient w(X, Y) for the object to be blurred is set to 1.
- a preferred embodiment of this aspect is a system in which the coefficient determination unit 11 has:
- object input means for inputting the input image data i(X, Y) and the identification numbers of objects in the computer graphics, together with information on the object to be subjected to blurred image processing;
- mask area acquisition means for obtaining a mask area for the pixel data portion of the object specified by the object input means;
- mask area image data acquisition means for obtaining new image data for the mask area acquired by the mask area acquisition means; and
- coefficient determination means for setting the coefficient w(x, y) of the mask area to 1,
- and in which, for the mask area determined by the coefficient determination unit 11, the blurred image acquisition unit 12 receives the coefficient w(x, y) = 1 and the new image data as input image data, and obtains a blurred image of the mask of the object.
- the object input means receives the input image data i(X, Y), the identification numbers of the objects in the computer graphics, and information on the object to be subjected to the blurred image processing.
- the mask area acquisition means obtains a mask area for the pixel data portion of the object specified by the object input means. This mask area is an area indicating the position of the object on the graphic.
- the image data acquisition means acquires new image data of the mask area acquired by the mask area acquisition means. For example, when it is desired to dimly illuminate an object, light color information may be read from a database or the like as mask color information.
- the coefficient determining means determines the coefficient w (x, y) of the mask area as 1.
- the present invention also provides a program for causing a computer to function as the above system.
- the present invention also provides a computer-readable information recording medium storing such a program.
- FIG. 2 shows an example of a blurred image using a summed-area table (SAT).
- FIG. 3 is a conceptual diagram for explaining the blurred image data.
- FIG. 4 is a block diagram of a system according to the second embodiment.
- FIG. 5 is a circuit diagram for implementing the blurred image generation system of the present invention.
- FIG. 6 is a circuit diagram for implementing the blurred image generation system of the present invention.
- FIG. 7 is a block diagram showing a hardware configuration for obtaining the blurred image technique of the present invention.
- FIG. 8 is a block diagram for explaining the post-effect core.
- FIG. 9 is a diagram for explaining the pixel positions of Pt, Pxy, and Pb.
- FIG. 10 is a block diagram of a hardware implementation for realizing the blur calculation process.
- FIG. 11A shows an image obtained by extracting only a target near the viewpoint from a certain scene.
- FIG. 11B shows a blurred image obtained using the mask of FIG. 11A.
- FIG. 11C is a diagram illustrating a smooth mask using the threshold A and the threshold B.
- FIG. 12 is a diagram showing an image synthesized using the mask of FIG. 11C without blurring the mask target.
- FIG. 13 is an original image before performing the glowing process in the fourth embodiment.
- FIG. 14 is a diagram illustrating an input image area having stencil values of a lantern object.
- FIG. 15 shows an image obtained in Example 4.
- FIG. 16 is for obtaining a samurai image.
- FIG. 17 shows a blurred mask.
- FIG. 18 is a diagram showing an image obtained in Example 5.
- FIG. 3 is a conceptual diagram for explaining the blurred image data. Each cell represents a pixel. The portion indicated by a triangle in the figure is the pixel for which blurred image data is obtained, and corresponds to the point (x, y) in FIG. 1. In FIG. 3, the points indicated by white (open) circles are the coordinates (Xr, Yt), (Xr, Yb), (Xl, Yt), and (Xl, Yb). Sat(x, y) can then be obtained as shown in Formula II.
- Formula I obtains sat for a pixel (x, y) from the input data i(x, y) of that pixel, the sat value of the pixel whose x coordinate is smaller by one, the sat value of the pixel whose y coordinate is smaller by one, and the sat value of the pixel whose x and y coordinates are both smaller by one:
- sat(X, Y) = i(X, Y) + sat(X-1, Y) + sat(X, Y-1) - sat(X-1, Y-1) .... Formula I
- when X-1 is -1, sat(X-1, Y) and sat(X-1, Y-1) are set to 0; when Y-1 is -1, sat(X, Y-1) and sat(X-1, Y-1) are set to 0.
- the values of w and h may be obtained from the blur degree W. That is, the larger W is, the wider the area over which the blurred image processing is performed.
- the relationship between W and w and h may be stored in a one-dimensional lookup table, for example, and w and h may be obtained based on the inputted W.
- the value of w × h required for the arithmetic processing may also be stored in the lookup table, so that the multiplication value can be read from the lookup table without multiplying w and h during the computation.
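- A lookup table along these lines might store w, h, and the precomputed product w × h keyed by the blur degree W; the mapping W → 2W + 1 below is purely an assumption for illustration:

```python
# Hypothetical one-dimensional lookup table: blur degree W -> (w, h, w*h).
WH_TABLE = {W: (2 * W + 1, 2 * W + 1, (2 * W + 1) ** 2) for W in range(16)}

w, h, wh = WH_TABLE[2]   # e.g. W = 2 gives a 5 x 5 window, w*h = 25 without a multiply
```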
- the blurred image data is then obtained from these values.
- a blurred image can be obtained by similarly obtaining blurred image data for the entire screen or at least a point in a certain area.
- first, sat(x, y) of the first row is obtained, where y = 0 and x runs from 0 to the x coordinate of the pixel at the right end.
- the obtained sat (x, 0) on the first line is stored in the front cache of the first cache.
- i(x, y) may be, for example, the red value Ri(x, y) at the coordinates (x, y), the green value Gi(x, y), the blue value Bi(x, y), or the transparency αi(x, y).
- the second row sat (x, 1) is obtained. Then, the obtained sat (x, 1) on the second line is stored in the back cache of the first cache.
- the input i(0, 1) is stored in the storage unit. Then, i(0, 1), sat(-1, 1), sat(0, 0), and sat(-1, 0) are read from the storage unit. Since sat(-1, 1) and sat(-1, 0) are 0, sat(0, 1) can be obtained by adding and subtracting i(0, 1), sat(-1, 1), sat(0, 0), and sat(-1, 0).
- sat(x, 2) of the third row is obtained using the input data i(x, 2) and sat(x, 1) of the second row. The obtained sat(x, 2) of the third row is stored in the front cache of the first cache. At this time, since sat(x, 2) of the third row overwrites sat(x, 0) of the first row, the data related to sat(x, 0) of the first row is erased from the front cache of the first cache.
- sat(Xr, Yt) and sat(Xl, Yt) exist in sat(x, 2) of the third row. Therefore, the values of sat(Xr, Yt) and sat(Xl, Yt) can be read from the front cache of the first cache.
- the sat (x, 3) in the fourth row is obtained.
- the obtained sat (x, 3) is stored in the front cache of the second cache.
- sat (x, 1) will not be used in the future.
- the back cache of the first cache may be erased and used as a memory space for other operations until the blurred image of the next line is obtained.
- the sat calculation means uses the input image data i(X, m-1) of the m-th row and sat(X, m-2), the sat calculation result of the (m-1)-th row stored in one of the caches, to obtain sat(X, m-1), the sat value of the m-th row. Then, one of the caches updates its stored information using sat(X, m-1), the sat calculation result of the m-th row. This operation is repeated until m becomes Yt - 2.
- in this way, sat values are obtained up to sat(x, Yb).
- sat(x, Yb+1) and subsequent rows may also be obtained and used for the subsequent calculation.
- the obtained sat (x, Y b ) is stored in the back cache of the second cache.
- the data related to sat(x, Yb-2) is erased from the back cache of the second cache.
- sat(Xr, Yt) and sat(Xl, Yt) can be read from the front cache of the first cache.
- sat(Xr, Yb) and sat(Xl, Yb) can be read from the back cache of the second cache.
- the blurred image generation system of the present invention reads sat(Xr, Yt), sat(Xl, Yt), sat(Xr, Yb), and sat(Xl, Yb) from these caches. Then, using, for example, a sign converter and an adder, it performs the calculation for obtaining Sat(x, y) according to Formula II.
- this may be done by having the control unit, which receives instructions from the control program, read sat(Xr, Yt), sat(Xl, Yt), sat(Xr, Yb), and sat(Xl, Yb) from the caches and cause the calculation unit to compute sat(Xr, Yt) - sat(Xr, Yb) - sat(Xl, Yt) + sat(Xl, Yb). The same applies hereinafter.
- the blurred image data at the coordinates (x, y) can be obtained according to the formula III.
- blurred image data examples include R, G, and B data indicating color and ⁇ (alpha) data indicating transparency.
- the blurred image data of the next point (x+1, y) can then be obtained. Since sat(Xr+1, Yt), sat(Xl+1, Yt), and sat(Xr+1, Yb) have already been obtained, they may be read from the caches as appropriate.
- the value of sat(Xl+1, Yb) may likewise be read if it is already stored.
- if the value of sat(Xl+1, Yb) is not yet available, then according to Equation 2, i(Xl+1, Yb), sat(Xl, Yb), sat(Xl+1, Yb-1), and sat(Xl, Yb-1) are used to obtain the value of sat(Xl+1, Yb), which is stored in the back cache of the second cache.
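- In code, producing the one missing sat cell on the fly from its already-cached neighbours (per the recurrence referred to as Equation 2) might look like this sketch, with hypothetical arguments:

```python
def extend_sat_cell(sat_row_yb, sat_row_yb_minus1, i_val, x):
    # sat(x, Yb) = i(x, Yb) + sat(x-1, Yb) + sat(x, Yb-1) - sat(x-1, Yb-1),
    # where sat_row_yb holds row Yb computed so far and sat_row_yb_minus1
    # holds row Yb-1 (assumes x >= 1).
    return i_val + sat_row_yb[x - 1] + sat_row_yb_minus1[x] - sat_row_yb_minus1[x - 1]
```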
- sat (x, 3) in the fourth row is obtained.
- the sat (x, 3) can be obtained using the input data i (x, 3) and the sat (x, 2) in the third row. That is, sat (x, 2) on the third line is stored in the front cache of the first cache.
- the blurred image generation system of the present invention obtains sat (x, 3) on the fourth line using the input i (x, 3) and sat (x, 2) read from the cache. Then, the obtained sat (x, 3) on the fourth line is stored in the back cache of the first cache.
- sat(x, Yb+1) of the row to which (Xr, Yb+1) and (Xl, Yb+1) belong can also be obtained in the same manner as described above. It can thus be understood that the blurred image data of the row following the point (x, y) can be obtained in the same way.
- a second embodiment of the present invention will be described.
- the second embodiment further uses mask technology.
- for example, a blurred image of objects that are not the target of focus can be obtained.
- pixels whose blur coefficient (described later) is 0 do not participate in the blurred image processing.
- This blurring coefficient can be stored in a storage area such as an alpha channel of the image.
- the blur coefficient is expressed as w (x, y).
- FIG. 4 is a block diagram of a system according to the second embodiment.
- the system includes a coefficient determination unit 11 and a blurred image acquisition unit 12.
- the blurred image acquisition unit 12 includes an input image data input unit 20, a sat calculation unit 21, caches 22 and 23, a Sat calculation unit 24, and a blurred image data acquisition unit 25.
- the cache includes a first cache 22 and a second cache 23, the first cache 22 includes a front cache 22a and a back cache 22b, and the second cache 23 includes a front cache 23a and a back cache 23b. including.
- Reference numeral 27 in FIG. 4 indicates output means.
- the elements shown in FIG. 4 merely show necessary elements, and this system can appropriately employ elements used in a normal computer graphic apparatus.
- Each unit may be implemented by hardware such as a circuit, a chip, or a core, or may be implemented by cooperation of hardware and software.
- the coefficient determination unit is an element for determining w (x, y) in the following formula Ia.
- the blurred image acquisition unit can appropriately use the elements according to the first embodiment described above.
- when blurring the entire graphic, w may be set to 1 for all (x, y). By reducing the value of w, the influence of the original input image data can be reduced.
- Formula I can be extended to Formula Ia.
- sat(X, Y) = w(X, Y) × i(X, Y) + sat(X-1, Y) + sat(X, Y-1) - sat(X-1, Y-1) .... Formula Ia
- in Formula Ia, when X-1 is -1, sat(X-1, Y) and sat(X-1, Y-1) are set to 0; when Y-1 is -1, sat(X, Y-1) and sat(X-1, Y-1) are set to 0.
- IBlur(x, y), the blurred image value, is expressed by the following Formula III or Formula IIIa. Formula III or Formula IIIa may be selected as appropriate based on information input to the system.
- IBlur(x, y) = Sat(x, y)/(w × h) .... Formula III
- IBlur(x, y) = Sat(x, y)/W .... Formula IIIa
- in Formula IIIa, w × h in Formula III is replaced by W, the sum of all the values w(x, y) contained in the rectangular region of width w and height h, that is, the rectangle whose four vertices are (Xr, Yt), (Xr, Yb), (Xl, Yt), and (Xl, Yb). In other words, the values of w(x, y) contained in this rectangle are added up.
- for example, the sum of w may be obtained over the region whose x coordinate ranges from -5 to +5 and whose y coordinate ranges from +5 to -5 around a certain point.
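- The patent does not spell out how W is accumulated; one plausible reading, shown here purely as an assumption, keeps a second SAT over the coefficients w(x, y) themselves so that the window sum W comes from the same four-corner lookup:

```python
def weighted_blur_pixel(sat_wi, sat_w, x, y, w, h):
    # sat_wi: SAT of w(X,Y) * i(X,Y), built per Formula Ia.
    # sat_w:  SAT of the coefficients w(X,Y) alone (assumed), giving W.
    def window_sum(sat):
        xr, xl = x - w // 2 - 1, x + w // 2
        yt, yb = y - h // 2 - 1, y + h // 2
        s = lambda cx, cy: 0 if cx < 0 or cy < 0 else int(sat[cy][cx])
        return s(xr, yt) - s(xr, yb) - s(xl, yt) + s(xl, yb)
    W = window_sum(sat_w)
    return window_sum(sat_wi) / W if W else 0.0   # Formula IIIa: Sat / W
```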
- the blurring process is changed according to the depth value z or the pixel depth. That is, a value related to the depth value z or the depth d of each pixel is input to the coefficient determination unit 11 in FIG.
- This system stores either or both of the threshold A and the threshold B in the memory.
- the coefficient determination unit 11 reads out one or both of the threshold value A and the threshold value B from the memory and compares them with the depth value z or the depth d.
- the value of w may be read from a lookup table and used as w (x, y).
- when w(x, y) is between 0 and 1, arithmetic processing for obtaining that value may be performed.
- suppose a threshold A of a certain distance is given. Objects whose depth value exceeds this threshold are out of focus and are fully subject to the blurring process.
- suppose a threshold B of a certain distance is given. Objects closer than this threshold are fully in focus. For an object between these two thresholds, a transition from in focus to out of focus occurs, that is, a transition from no blurring at all to complete blurring.
- an object is rendered in a frame buffer that includes a color buffer (color image) and a depth buffer (with depth information from the camera).
- the visible region from the camera is defined by the near clipping plane and the far clipping plane.
- two threshold values A and B are used. These two threshold values correspond to the near clipping plane and the far clipping plane.
- when the depth value is greater than A, w is set to 1.
- when the depth value is smaller than B, w is set to 0.
- the threshold values A and B are stored in the memory. The value of A or B stored in the memory is read out, and the depth value input from the previous module is compared with A and B. If the depth value is greater than A, w is set to 1; if the depth value is smaller than B, w is set to 0.
- w may be continuously changed with a value larger than 0 and smaller than 1.
- a weighted average value from threshold B to threshold A may be assigned to a numerical value from 0 to 1.
- the blur coefficient w may be an intermediate value from 0 to 1.
- a mask value that is a set of the blurring coefficients w may be obtained by a preprocessing module.
- the fourth embodiment provides a glowing process. That is, an object included in a certain graphic is specified and a blurred image is acquired.
- the identification numbers of the objects included in the computer graphic are input to the coefficient determination unit 11 as stencil values. In accordance with the identification number of the object input from the stencil buffer, it is determined whether or not to perform the blurring process. In other words, information on the objects to be blurred is input to the coefficient determination unit 11, and the identification number of each object is used to determine whether it is to be blurred or not. For example, w is set to 0 for i(x, y) belonging to a non-blurring target. Alternatively, data that is not to be blurred may be passed through the blurred image acquisition unit 12 so that it is sent on to the next module.
- information related to the blurring target may be input to the coefficient determination unit 11.
- the coefficient determination unit 11 uses the identification number of each object to determine whether it is to be blurred or not. For example, w is set to 1 for i(x, y) belonging to an object to be blurred. In this way, only a specific object can be blurred.
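- For example, a coefficient mask for the glowing process could be derived from the stencil buffer as follows; this is a sketch, and the buffer layout and the target_ids parameter are assumptions:

```python
def glow_coefficients(stencil, target_ids):
    # w = 1 for pixels whose stencil value (object identification number)
    # belongs to a blur target, w = 0 for everything else.
    return [[1.0 if s in target_ids else 0.0 for s in row] for row in stencil]
```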
- a blurred image can be acquired by appropriately using the system configuration of the first to third embodiments.
- the fifth embodiment relates to a system capable of performing image processing for obtaining a silhouette.
- by using the glowing object drawing method according to the fourth embodiment, a silhouette of an object can easily be obtained. If the object identification number can be obtained, mask data can be obtained.
- the mask itself is blurred.
- the blurred area is slightly larger than the original object due to the blurring process. For this reason, the area subjected to the blurring process includes an area having an identification number different from the identification number of the original object.
- the mask is blurred, and the color value and the blurred mask are adjusted. In this way, for example, image processing such as making the area around the main character or a weapon in a game shine can be performed easily.
- the identification number of the object included in the computer graphic is input to the coefficient determination unit 11.
- when image data is input, the image data representing the object is specified using the identification number of the object.
- a set of the specified image data (x, y) is a mask. Then, setting values such as the color, transparency, and luminance of the mask are read out as appropriate.
- the mask image data of the new object is output to the blurred image acquisition unit 12. Then, the blurred image acquisition unit 12 acquires a blurred image of the mask image data and outputs it to the next module. In the next module, image composition processing is performed.
- Example 1 relates to an example of an image generation system based on the first embodiment.
- FIGS. 5 and 6 are circuit diagrams for implementing the blurred image generation system of the present invention. These circuits can also be implemented in software.
- RBX is the blur size ((w - 1)/2) in the x-axis direction.
- RBY is the blur size ((h - 1)/2) in the y-axis direction.
- RTX is the width (W or W-1) of the input image.
- RTY is the height (H or H-1) of the input image.
- RTBX is the sum of RTX and RTY.
- the operation for obtaining a blurred image with the blurred image generation system of the present invention shown in FIGS. 5 and 6 will now be described.
- the circuit of FIG. 5 determines the coordinates (Xr, Yt), (Xr, Yb), (Xl, Yt), and (Xl, Yb).
- when Xl is obtained, it can be found by adding the blur width RBX in the x-axis direction (2, that is, (5-1)/2, in the case of FIG. 3) to the x of the coordinates (x, y).
- the clipping circuit compares the added value with RTX, and if the added value is 0 or more and RTX or less, the added value is directly output as XP.
- the clipping circuit outputs RTX as XP when the added value is equal to or greater than RTX.
- otherwise (when the added value is negative), the clipping circuit outputs 0 as XP.
- a clipping circuit different from the above compares the added value with RTBX, and if the added value is 0 or more and RTBX or less, the added value is output as XAP as it is.
- the clipping circuit outputs RTBX as XAP when the added value is equal to or greater than RTBX.
- otherwise (when the added value is negative), the clipping circuit outputs 0 as XAP.
- the circuit shown in FIG. 5 outputs XP, XAP, XM, XAM, YM, and YP.
- "flag" denotes a flag that designates a cache.
- LL indicates the lower left
- LR indicates the lower right
- UL indicates the upper left
- UR indicates the upper right.
- the sat at the white circle LL can be obtained using the input value (RGBA) at the white circle LL, the sat at the coordinate to its left, the sat at the coordinate above it, and the sat at the coordinate diagonally above and to its left.
- the x coordinate of the point to the left of the white circle LL and of the point diagonally above it to the left is XP-1.
- the addition and subtraction of the values i and sat may be performed according to Equation 2 and can be realized using the adders and subtracters shown in FIG. 6. Note that a subtracter may be realized using a sign conversion circuit and an adder.
- the multiplication value w × h of w and h is obtained using a multiplier as shown in FIG. 6. Then, 1/(w × h) is obtained by a reciprocal circuit (1/x). Sat(x, y)/(w × h) can then be obtained by multiplying Sat(x, y) by 1/(w × h) with a multiplier. The value of Sat(x, y)/(w × h) for each of R, G, B, and α is output as the blurred image at the point (x, y).
- Example 2, based on the second embodiment of the present invention, will now be described.
- the second embodiment further uses mask technology.
- a blurred image of a point that is not a target of attention can be obtained.
- pixels whose blur coefficient (described later) is 0 do not participate in the blurred image processing.
- This blurring coefficient can be stored in a storage area such as an alpha channel of the image.
- the blur coefficient is expressed as w (x, y).
- FIG. 7 is a block diagram showing a hardware configuration for obtaining the blurred image technique of the present invention.
- a module that performs blurred image processing is mounted on a PostEffect (post-effect) core.
- the slave I / F is an interface module for transmitting and receiving commands and data. This interface exchanges information with other system components.
- the master bus arbitration circuit (Arbiter) is a circuit for adjusting the access to the external memory and the writing / reading conflict in the storage unit. Examples of external memory are color buffers and depth buffers.
- the block / line conversion circuit is a module for converting the storage format of the memory data.
- the screen control circuit is a module for controlling image information output to the monitor.
- the post-effect core is a module that performs image processing and includes a blurred image processing module.
- FIG. 8 is a block diagram for explaining the post-effect core.
- the slave I/F module receives commands from other modules, including, for example, the slave I/F of FIG. 7. X and Y coordinates are then generated according to a scan-line algorithm: each line of the input image is processed in order, and the points of each line are generated sequentially.
- FIG. 9 is a diagram for explaining the pixel positions of Pt, Pxy, and Pb.
- Pt, Pxy, and Pb are as shown in FIG. 9.
- additional coordinates Txy may be generated for further texture access.
- the next module is a color, depth, and stencil readout circuit.
- this module performs processing using Pt, Pxy, and Pb described above.
- This module works with the master bus arbitration circuit to access various memories. Then, typically RGB and alpha (A) values, stencil and depth values are read out. The read value is transmitted to the blur module.
- the result of the blur processing output from the blur calculation module is used as input to the post-processing module.
- This module performs various operations. Examples include blending, alpha, depth or stencil testing, and converting colors to luminance.
- the output color information is stored in the write memory. This memory stores the obtained color information in the memory using a master bus arbitration circuit.
- FIG. 10 is a block diagram of a hardware implementation for realizing the blur calculation process. This module does not process Pxy and Txy but simply passes them on to the next module, the post-processing circuit.
- this module includes SRAMs (SRAM0t, SRAM1t, SRAM0b, and SRAM1b). The information necessary for the calculation of Formula I is read from these SRAMs, and the input value i(x, y) is received from the previous module.
- sat(Xr, Yt) and sat(Xr, Yb) are obtained as outputs of the two summing modules, and sat(Xl, Yt) and sat(Xl, Yb) can be obtained by reading from the SRAMs.
- these values are then subjected to the calculations of Formulas II and III in the summation module and the division module. The result Bxy is transmitted to the next module.
- the value of w(x, y) is calculated twice, for example for the top and bottom inputs.
- w (x, y) may be obtained based on depth, alpha, stencil test, color to luminance conversion, texture input using appropriate switch selection, and the like.
- a typical initial value of i (x, y) is an RGBA input value.
- R′G′B′W is output.
- R′, G′, and B′ are values obtained by correcting the input RGB values.
- W is a weight value. In the summation module and the division module, the weight value is obtained as the W of Formula IIIa.
- W is a value obtained as a result of the sat operation and is an output of the summing module. W may also be the result of reading the SRAM.
- Example 3, in which the blur coefficient is changed based on the depth, is based on the third embodiment.
- the blurring process is changed based on the depth value z or the distance (depth) d from the camera.
- the depth-of-field effect can be obtained as follows. For a very deep field, for example, a fairly deep field is defined as follows. Assume that a scene is obtained using a camera and that a threshold A of a certain distance is given. Objects whose depth value exceeds this threshold are out of focus and are fully subject to the blurring process. Assume that a threshold B of a certain distance is given. Objects closer than this threshold are fully in focus. For an object between these two thresholds, a transition from in focus to out of focus occurs, that is, a transition from no blurring at all to complete blurring.
- an object is rendered in a frame buffer that includes a color buffer (color image) and a depth buffer (with depth information from the camera).
- the visible region from the camera is defined by the near clipping plane and the far clipping plane.
- two threshold values A and B are used. These two threshold values correspond to the near clipping plane and the far clipping plane.
- when the depth value is greater than A, w is set to 1.
- when the depth value is smaller than B, w is set to 0.
- the threshold values A and B are stored in the memory. The value of A or B stored in the memory is read out, and the depth value input from the previous module is compared with A and B. If the depth value is greater than A, w is set to 1; if the depth value is smaller than B, w is set to 0.
- w may be continuously changed with a value larger than 0 and smaller than 1.
- a weighted average value from threshold B to threshold A may be assigned to a numerical value from 0 to 1.
- the blur coefficient w may be an intermediate value from 0 to 1.
- a mask value that is a set of the blurring coefficients w may be obtained by a preprocessing module.
- FIG. 11 (a) shows an image in which only a target near the viewpoint is extracted from a certain scene.
- This black part is a Boolean function mask.
- the depth value of each pixel is read out, the threshold value is read out, and the two are compared. It is then only necessary to extract the pixels whose depth value is equal to or less than the predetermined value.
- the set of pixels whose depth values are equal to or smaller than the predetermined value is the Boolean function mask shown in FIG. 11A.
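- A Boolean mask of this kind might be derived from the depth buffer as in the following sketch (the names are assumptions):

```python
def boolean_depth_mask(depth_buffer, threshold):
    # True for pixels at or nearer than the threshold, i.e. the objects
    # near the viewpoint that form the mask of FIG. 11(a).
    return [[d <= threshold for d in row] for row in depth_buffer]
```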
- FIG. 11B is a diagram showing a blurred image obtained using the mask of FIG. 11A.
- FIG. 11C is a diagram illustrating a smooth mask using the threshold A and the threshold B.
- FIG. 12 is a diagram showing an image synthesized using the mask of FIG. 11C without blurring the mask target.
- a computer graphic rendering object is rendered in a frame buffer that includes a color buffer that stores a color image and a stencil buffer that stores a stencil value associated with an object identification number.
- FIG. 13 shows an original image before performing the glowing process in the fourth embodiment.
- three lanterns are included as glowing objects.
- a glowing effect is provided around three lanterns.
- the first step in this method is to define a mask.
- the value of w is 1.
- only the lantern can be blurred by using the blurring technique of the first embodiment and the blurring technique of the second embodiment together.
- FIG. 14 is a diagram showing a collection of input image areas having stencil values of lantern objects.
- FIG. 15 shows the lantern object image subjected to the blurring process and blended with the original image.
- FIG. 15 shows an image obtained in Example 4.
- FIG. 16 shows the result.
- FIG. 16 is for obtaining a samurai image. As shown in FIG. 16, only the silhouette, that is, the outline of the samurai, is obtained. This output value is exactly a Boolean value based on the object identification number test.
- FIG. 17 shows a blurred mask. Then, the mask and the original image are adjusted.
- FIG. 18 is a diagram showing an image obtained in Example 5.
- the present invention is a system for obtaining a blurred image. Therefore, it can be used in the field of computer graphics.
Description
IBlur(x, y) = Sat(x, y)/(w × h) .... Formula III
Here, Sat(x, y) = sat(Xr, Yt) - sat(Xr, Yb) - sat(Xl, Yt) + sat(Xl, Yb) .... Formula II.
In the y coordinate values, t means top and b means bottom.
sat(X, Y) = i(X, Y) + sat(X-1, Y) + sat(X, Y-1) - sat(X-1, Y-1) .... Formula I
In Formula I, when X-1 is -1, sat(X-1, Y) and sat(X-1, Y-1) are set to 0; when Y-1 is -1, sat(X, Y-1) and sat(X-1, Y-1) are set to 0.
Let the input image data at the coordinates (x, y) be i(x, y),
let the blurred image data at the coordinates (x, y) be IBlur(x, y),
let the points whose x coordinate differs from the coordinates (x, y) by a predetermined value and whose y coordinate differs by a predetermined value be (Xr, Yt), (Xr, Yb), (Xl, Yt), and (Xl, Yb),
where l (el) > r and b > t,
let the value satisfying the following Formula Ia for a point (X, Y) be sat(X, Y),
let the value satisfying the following Formula II be Sat(x, y),
and let Xl - Xr be w and Yb - Yt be h.
The system has input image data input means 20 for receiving the input image data i(X, Y) at the coordinates (X, Y) and for receiving the blur coefficient w(X, Y) from the coefficient determination unit 11;
sat calculation means 21 for obtaining the value of sat given by Formula Ia using the input image data i(X, Y) received by the input image data input means;
caches 22 and 23 for storing the sat calculation results for each row obtained by the sat calculation means;
Sat calculation means 24 for reading the values of sat(Xr, Yt), sat(Xr, Yb), sat(Xl, Yt), and sat(Xl, Yb) from the caches and obtaining the value of Sat(x, y);
and blurred image data acquisition means 25 for obtaining IBlur(x, y) using the value of Sat(x, y) obtained by the Sat calculation means.
The first cache includes a front cache 22a and a back cache 22b,
and the second cache includes a front cache 23a and a back cache 23b.
The sat calculation means obtains sat(X, 0), the sat value of the first row, using the input image data i(X, 0) of the first row,
and the front cache of the first cache stores sat(X, 0), the sat calculation result of the first row.
The sat calculation means obtains sat(X, 1), the sat value of the second row, using the input image data i(X, 1) of the second row and sat(X, 0), the sat calculation result of the first row stored in the front cache of the first cache,
and the back cache of the first cache stores sat(X, 1), the sat calculation result of the second row.
The sat calculation means obtains sat(X, 2), the sat value of the third row, using the input image data i(X, 2) of the third row and sat(X, 1), the sat calculation result of the second row stored in the back cache of the first cache,
and the front cache of the first cache updates its stored information using sat(X, 2), the sat calculation result of the third row.
The sat calculation means obtains sat(X, Yt+1), the sat calculation result of the Yt-th row, using the input image data i(X, Yt+1) of the Yt-th row and sat(X, Yt) read from the cache,
and the front cache of the second cache stores sat(X, Yt+1), the sat calculation result of the Yt-th row.
The sat calculation means obtains sat(X, Yt+2), the sat calculation result of the (Yt+1)-th row, using the input image data i(X, Yt+2) of the (Yt+1)-th row and sat(X, Yt+1) read from the front cache of the second cache,
and the back cache of the second cache stores sat(X, Yt+2), the sat calculation result of the (Yt+1)-th row.
The sat calculation means obtains sat(X, Yt+3), the sat calculation result of the (Yt+2)-th row, using the input image data i(X, Yt+3) of the (Yt+2)-th row and sat(X, Yt+2) read from the back cache of the second cache,
and the front cache of the second cache updates its stored information using sat(X, Yt+3), the sat calculation result of the (Yt+2)-th row.
Either the front cache of the second cache or the back cache of the second cache stores sat(X, Yb-1), the sat calculation result of the (Yb-2)-th row.
In Formula Ia, when X-1 is -1, sat(X-1, Y) and sat(X-1, Y-1) are set to 0; when Y-1 is -1, sat(X, Y-1) and sat(X-1, Y-1) are set to 0.
The coefficient determination unit has input means for receiving information on the depth z(X, Y) or the depth d(X, Y) of the input image data i(X, Y);
threshold input means for receiving the first threshold A and the second threshold B;
comparison operation means for comparing the depth z(X, Y) or the depth d(X, Y) with the first threshold A and the second threshold B;
and coefficient determination means for determining the coefficient w(X, Y) according to the comparison result of the comparison operation means.
The coefficient determination means sets w(X, Y) to 0 when the depth z(X, Y) or the depth d(X, Y) is greater than or equal to the first threshold A,
sets w(X, Y) to 1 when the depth z(X, Y) or the depth d(X, Y) is less than or equal to the second threshold B,
and performs arithmetic processing so that w(X, Y) takes a value between 0 and 1 when the depth z(X, Y) or the depth d(X, Y) is greater than the second threshold B and smaller than the first threshold A.
The system also has object input means for receiving the input image data i(X, Y) and the identification numbers of objects in the computer graphics, together with information on the object to be subjected to blurred image processing;
and coefficient determination means that identifies the object to be subjected to blurred image processing using the information on that object, the identification numbers, and the input image data i(X, Y), and sets the blur coefficient w(X, Y) for the object to be blurred to 1.
Such is the system.
The coefficient determination unit 11 has:
object input means for receiving the input image data i(X, Y) and the identification numbers of objects in the computer graphics, together with information on the object to be subjected to blurred image processing;
mask area acquisition means for obtaining a mask area for the pixel data portion of the object specified by the object input means;
mask area image data acquisition means for obtaining new image data for the mask area acquired by the mask area acquisition means;
and coefficient determination means for setting the coefficient w(x, y) of the mask area to 1.
For the mask area determined by the coefficient determination unit 11, the blurred image acquisition unit 12 receives the coefficient w(x, y) = 1 and the new image data as input image data, and obtains a blurred image of the mask of the object.
Such is the system.
The first embodiment of the present invention will be described below.
Sat(x, y) = sat(Xr, Yt) - sat(Xr, Yb) - sat(Xl, Yt) + sat(Xl, Yb) .... Formula II
In Formula I, when X-1 is -1, sat(X-1, Y) and sat(X-1, Y-1) are set to 0; when Y-1 is -1, sat(X, Y-1) and sat(X-1, Y-1) are set to 0.
The second embodiment of the present invention will be described. The second embodiment further uses a mask technique. In the second embodiment, for example, a blurred image of objects that are not the target of focus can be obtained. Pixels whose blur coefficient (described later) is 0 do not participate in the blurred image processing. This blur coefficient can be stored in a storage area such as the alpha channel of the image. In the following description, the blur coefficient is written w(x, y).
In Formula Ia, when X-1 is -1, sat(X-1, Y) and sat(X-1, Y-1) are set to 0; when Y-1 is -1, sat(X, Y-1) and sat(X-1, Y-1) are set to 0.
Sat(x, y) = sat(Xr, Yt) - sat(Xr, Yb) - sat(Xl, Yt) + sat(Xl, Yb) .... Formula II
IBlur(x, y) = Sat(x, y)/W .... Formula IIIa
The third embodiment will now be described. In this embodiment, the blurring process is varied according to the depth value z or the pixel depth. That is, a value related to the depth value z or the depth d of each pixel is input to the coefficient determination unit 11 in FIG. 4. The system stores either or both of the threshold A and the threshold B in memory. The coefficient determination unit 11 reads one or both of the thresholds A and B from the memory and compares them with the depth value z or the depth d. Then, for example, the value of w may be read from a lookup table and used as w(x, y), or, when w(x, y) is between 0 and 1, arithmetic processing for obtaining that value may be performed.
The fourth embodiment provides a glowing process. That is, an object included in a certain graphic is specified and a blurred image of it is acquired.
Example 2, based on the second embodiment of the present invention, will be described. The second embodiment further uses a mask technique. In the second embodiment, for example, a blurred image of points that are not the target of attention can be obtained. Pixels whose blur coefficient (described later) is 0 do not participate in the blurred image processing. This blur coefficient can be stored in a storage area such as the alpha channel of the image. In the following description, the blur coefficient is written w(x, y).
This example is based on the third embodiment. In Example 3, the blurring process is varied based on the depth value z or the distance (depth) d from the camera. The depth-of-field effect can be obtained as follows. For a very deep field, for example, a fairly deep field is defined as follows. Assume that a scene is obtained using a camera and that a threshold A of a certain distance is given. Objects whose depth value exceeds this threshold are out of focus and are fully subject to the blurring process. Assume that a threshold B of a certain distance is given. Objects closer than this threshold are fully in focus. For an object between these two thresholds, a transition from in focus to out of focus occurs, that is, a transition from no blurring at all to complete blurring.
In the following example, the computer graphics rendering target is rendered into a frame buffer that includes a color buffer storing the color image and a stencil buffer storing stencil values associated with object identification numbers.
By using the glowing object drawing method described above, the silhouette of an object can easily be obtained. If the object identification number can be obtained, mask data can be obtained. Image data related to light is used as the mask data of the object. In this aspect, the mask itself is blurred. The blurred area becomes slightly larger than the original object because of the blurring process, so the blurred area includes portions whose identification numbers differ from that of the original object. In post-processing, for example, only the fragments having a different object identification number may be output to memory. FIG. 16 shows the result. FIG. 16 is for obtaining a samurai image. As shown in FIG. 16, only the silhouette, that is, the outline of the samurai, is obtained. This output value is exactly a Boolean value based on the object identification number test.
Claims (10)
- A system for obtaining blurred image data for computer graphics, where
the coordinates of a pixel for which blurred image data is obtained are (x, y),
the input image data at the coordinates (x, y) is i(x, y),
the blurred image data at the coordinates (x, y) is IBlur(x, y),
the points whose x coordinate differs from the coordinates (x, y) by a predetermined value and whose y coordinate differs by a predetermined value are (Xr, Yt), (Xr, Yb), (Xl, Yt), and (Xl, Yb),
l (el) > r and b > t,
the value satisfying the following Formula I for a point (X, Y) is sat(X, Y),
the value satisfying the following Formula II is Sat(x, y),
Xl - Xr is w and Yb - Yt is h,
and IBlur(x, y) is the value expressed by the following Formula III,
the system comprising:
input image data input means (20) for receiving the input image data i(X, Y) at the coordinates (X, Y);
sat calculation means (21) for obtaining the value of sat given by Formula I using the input image data i(X, Y) received by the input image data input means;
caches (22, 23) for storing the sat calculation results for each row obtained by the sat calculation means;
Sat calculation means (24) for reading the values of sat(Xr, Yt), sat(Xr, Yb), sat(Xl, Yt), and sat(Xl, Yb) from the caches and obtaining the value of Sat(x, y); and
blurred image data acquisition means (25) for obtaining IBlur(x, y) using the value of Sat(x, y) obtained by the Sat calculation means, the w, and the h,
wherein the caches include a first cache (22) and a second cache (23),
the first cache includes a front cache (22a) and a back cache (22b),
the second cache includes a front cache (23a) and a back cache (23b),
the sat calculation means obtains sat(X, 0), the sat value of the first row, using the input image data i(X, 0) of the first row,
the front cache of the first cache stores sat(X, 0), the sat calculation result of the first row,
the sat calculation means obtains sat(X, 1), the sat value of the second row, using the input image data i(X, 1) of the second row and sat(X, 0), the sat calculation result of the first row stored in the front cache of the first cache,
the back cache of the first cache stores sat(X, 1), the sat calculation result of the second row,
the sat calculation means obtains sat(X, 2), the sat value of the third row, using the input image data i(X, 2) of the third row and sat(X, 1), the sat calculation result of the second row stored in the back cache of the first cache,
the front cache of the first cache updates its stored information using sat(X, 2), the sat calculation result of the third row,
the same operation is repeated thereafter,
either the front cache of the first cache or the back cache of the first cache stores sat(X, Yt-1), the sat calculation result of the (Yt-2)-th row,
whichever of the front cache of the first cache and the back cache of the first cache does not store sat(X, Yt-1) stores sat(X, Yt), the sat calculation result of the (Yt-1)-th row,
the sat calculation means obtains sat(X, Yt+1), the sat calculation result of the Yt-th row, using the input image data i(X, Yt+1) of the Yt-th row and the sat(X, Yt) read from the caches,
the front cache of the second cache stores sat(X, Yt+1), the sat calculation result of the Yt-th row,
the sat calculation means obtains sat(X, Yt+2), the sat calculation result of the (Yt+1)-th row, using the input image data i(X, Yt+2) of the (Yt+1)-th row and the sat(X, Yt+1) read from the front cache of the second cache,
the back cache of the second cache stores sat(X, Yt+2), the sat calculation result of the (Yt+1)-th row,
the sat calculation means obtains sat(X, Yt+3), the sat calculation result of the (Yt+2)-th row, using the input image data i(X, Yt+3) of the (Yt+2)-th row and the sat(X, Yt+2) read from the back cache of the second cache,
the front cache of the second cache updates its stored information using sat(X, Yt+3), the sat calculation result of the (Yt+2)-th row,
the same operation is repeated thereafter,
either the front cache of the second cache or the back cache of the second cache stores sat(X, Yb-1), the sat calculation result of the (Yb-2)-th row, and
whichever of the front cache of the second cache and the back cache of the second cache does not store sat(X, Yb-1) stores at least X = 0 to X = Xl of sat(X, Yb), the sat calculation result of the (Yb-1)-th row.
sat(X, Y) = i(X, Y) + sat(X-1, Y) + sat(X, Y-1) - sat(X-1, Y-1) .... Formula I
In Formula I, when X-1 is -1, sat(X-1, Y) and sat(X-1, Y-1) are set to 0; when Y-1 is -1, sat(X, Y-1) and sat(X-1, Y-1) are set to 0.
Sat(x, y) = sat(Xr, Yt) - sat(Xr, Yb) - sat(Xl, Yt) + sat(Xl, Yb) .... Formula II
IBlur(x, y) = Sat(x, y)/(w × h) .... Formula III
- The system according to claim 1, wherein i(x, y) is one of the values R, G, or B indicating a color.
- The system according to claim 1, wherein i(x, y) is one of the values R, G, or B indicating a color, or an α (alpha) value indicating transparency.
- The system according to claim 1, wherein the cache that stores sat(X, Yt-1), among the front cache of the first cache and the back cache of the first cache, is used as a cache for processing other than obtaining blurred image data during the period from the sat calculation process for the Yt-th row until the process for obtaining IBlur(x, y).
- 係数決定部11と,ぼかし画像取得部12とを有するコンピュータグラフィックス用のぼかし画像データを得るためのシステムであって,
ぼかし画像データを得る画素の座標を(x,y)とし,
前記座標(x,y)における入力画像データをi(x,y)とし,
前記座標(x,y)におけるぼかし画像データをIBlur(x,y)とし,
前記座標(x,y)から,x座標が所定値異なり,y座標が所定値異なる点を,(Xr,Yt),(Xr,Yb),(Xl,Yt)及び(Xl,Yb)とし,
l(エル)>rとし,b>tとし,
ある点(X,Y)について,以下の式Iaを満たす値をsat(X,Y)とし,
以下の式IIを満たす値をSat(x,y)とし,
Xl-Xrをw,Yb-Ytをhとし,
前記IBlur(x,y)を,以下の式III又は式IIIaで表される値としたときに,
前記ぼかし画像取得部12は,
座標(X,Y)における入力画像データi(X,Y)を受け取るとともに,前記係数決定部11からぼかし係数w(X,Y)を受け取るための入力画像データ入力手段(20)と,
前記入力画像データ入力手段が受け取った入力画像データi(X,Y)を用いて式Iaで示されるsatの値を求めるためのsat演算手段(21)と,
前記sat演算手段が求めた行ごとのsat演算結果を記憶するためのキャッシュ(22,23)と,
前記キャッシュから,sat(Xr,Yt),sat(Xr,Yb),sat(Xl,Yt)及びsat(Xl,Yb)の値を読み出して,Sat(x,y)の値を求めるためのSat演算手段(24)と,
前記Sat演算手段が求めたSat(x,y)の値を用いて,IBlur(x,y)を用いるぼかし画像データ取得手段(25)と,
を有する,
コンピュータグラフィックス用のぼかし画像データを得るためのシステムであって,
前記キャッシュは,第1のキャッシュ(22)及び第2のキャッシュ(23)を含み,
前記第1のキャッシュは,フロントキャッシュ(22a)及びバックキャッシュ(22b)を含み,
前記第2のキャッシュは,フロントキャッシュ(23a)及びバックキャッシュ(23b)を含み,
the sat calculation means obtains sat(X, 0), the sat value of the first row, using the input image data i(X, 0) of the first row;
the front cache of the first cache stores sat(X, 0), the sat calculation result of the first row;
the sat calculation means obtains sat(X, 1), the sat value of the second row, using the input image data i(X, 1) of the second row and sat(X, 0), the sat calculation result of the first row stored in the front cache of the first cache;
the back cache of the first cache stores sat(X, 1), the sat calculation result of the second row;
the sat calculation means obtains sat(X, 2), the sat value of the third row, using the input image data i(X, 2) of the third row and sat(X, 1), the sat calculation result of the second row stored in the back cache of the first cache;
the front cache of the first cache updates its stored contents with sat(X, 2), the sat calculation result of the third row;
the same operations are repeated thereafter;
either the front cache of the first cache or the back cache of the first cache stores sat(X, Yt-1), the sat calculation result of the row Y = Yt-1;
whichever of the front cache of the first cache and the back cache of the first cache does not store said sat(X, Yt-1) stores sat(X, Yt), the sat calculation result of the row Y = Yt;
the sat calculation means obtains sat(X, Yt+1), the sat calculation result of the row Y = Yt+1, using the input image data i(X, Yt+1) of that row and said sat(X, Yt) read from the cache;
the front cache of the second cache stores sat(X, Yt+1), the sat calculation result of the row Y = Yt+1;
the sat calculation means obtains sat(X, Yt+2), the sat calculation result of the row Y = Yt+2, using the input image data i(X, Yt+2) of that row and said sat(X, Yt+1) read from the front cache of the second cache;
the back cache of the second cache stores sat(X, Yt+2), the sat calculation result of the row Y = Yt+2;
the sat calculation means obtains sat(X, Yt+3), the sat calculation result of the row Y = Yt+3, using the input image data i(X, Yt+3) of that row and said sat(X, Yt+2) read from the back cache of the second cache;
the front cache of the second cache updates its stored contents with sat(X, Yt+3), the sat calculation result of the row Y = Yt+3;
the same operations are repeated thereafter;
either the front cache of the second cache or the back cache of the second cache stores sat(X, Yb-1), the sat calculation result of the row Y = Yb-1;
and whichever of the front cache of the second cache and the back cache of the second cache does not store said sat(X, Yb-1) stores at least the portion from X = 0 to X = Xl of sat(X, Yb), the sat calculation result of the row Y = Yb.
sat(X, Y) = w(X, Y) × i(X, Y) + sat(X-1, Y) + sat(X, Y-1) - sat(X-1, Y-1) .... Equation Ia
where, in Equation Ia, sat(X-1, Y) and sat(X-1, Y-1) are taken as 0 when X-1 = -1, and sat(X, Y-1) and sat(X-1, Y-1) are taken as 0 when Y-1 = -1.
Sat(x, y) = sat(Xr, Yt) - sat(Xr, Yb) - sat(Xl, Yt) + sat(Xl, Yb) .... Equation II
IBlur(x, y) = Sat(x, y) / (w × h) .... Equation III
IBlur(x, y) = Sat(x, y) / W .... Equation IIIa
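A minimal sketch of Equation Ia, under the same assumptions as the earlier helpers (illustrative names, no boundary handling): the blur coefficient w(X, Y) supplied by the coefficient determination unit 11 is multiplied into each input sample as the table row is built.

```python
def weighted_sat_row(img_row, coef_row, prev_sat_row=None):
    """Equation Ia for one row: sat(X,Y) = w(X,Y)*i(X,Y) + sat(X-1,Y)
    + sat(X,Y-1) - sat(X-1,Y-1), with out-of-range terms taken as 0."""
    out, run = [], 0.0
    for x, value in enumerate(img_row):
        run += coef_row[x] * value  # coefficient folded in at input time
        out.append(run + (prev_sat_row[x] if prev_sat_row is not None else 0.0))
    return out
```

Equation IIIa differs from Equation III only in that the product w × h is expressed as a single filter degree W.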
- The system according to claim 5, wherein
the coefficient determination unit 11 comprises:
input means to which information on the depth z(X, Y) or the depth d(X, Y) of the input image data i(X, Y) is input;
threshold input means to which a first threshold A and a second threshold B are input;
comparison means for comparing the depth z(X, Y) or d(X, Y) with the first threshold A and the second threshold B;
and coefficient determination means for determining the coefficient w(X, Y) according to the comparison result of the comparison means;
wherein the coefficient determination means performs its calculation so as to set w(X, Y) to 0 when the depth z(X, Y) or d(X, Y) is greater than or equal to the first threshold A,
set w(X, Y) to 1 when the depth z(X, Y) or d(X, Y) is less than or equal to the second threshold B,
and set w(X, Y) to a value between 0 and 1 when the depth z(X, Y) or d(X, Y) is greater than the second threshold B and less than the first threshold A.
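The following sketch shows one way the coefficient determination means could realize this mapping; the linear ramp in the intermediate range is an assumption, since the claim only requires some value between 0 and 1 there.

```python
def blur_coefficient(depth, threshold_a, threshold_b):
    """Map a pixel's depth to a blur coefficient w(X,Y): 0 at or beyond
    the first threshold A, 1 at or below the second threshold B, and a
    value in (0,1) in between. threshold_a must exceed threshold_b."""
    if depth >= threshold_a:
        return 0.0
    if depth <= threshold_b:
        return 1.0
    # Linear ramp from 1 at B down to 0 at A -- one plausible choice;
    # the claim leaves the intermediate mapping open.
    return (threshold_a - depth) / (threshold_a - threshold_b)
```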
- The system according to claim 5, wherein
the coefficient determination unit 11 comprises:
object input means to which the input image data i(X, Y) and identification numbers of objects in the computer graphics are input, and to which information on an object to be subjected to blurred image processing is input;
and coefficient determination means that identifies the object to be subjected to blurred image processing using the information on that object, the identification numbers of the objects, and the input image data i(X, Y), and sets the blur coefficient w(X, Y) for that object to 1.
- The system according to claim 5, wherein
the coefficient determination unit 11 comprises:
object input means to which the input image data i(X, Y) and identification numbers of objects in the computer graphics are input, and to which information on an object to be subjected to blurred image processing is input;
mask region acquisition means for obtaining a mask region for the pixel data portion of the object specified by the object input means;
mask region image data acquisition means for obtaining new image data for the mask region acquired by the mask region acquisition means;
and coefficient determination means for setting the coefficient w(x, y) of the mask region to 1;
and wherein, for the mask region determined by the coefficient determination unit 11,
the blurred image acquisition unit 12 receives the coefficient w(x, y) = 1, receives the new image data as input image data,
and obtains a blurred image of the mask of the object.
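One plausible realization of this mask-region flow (illustrative names and data layout, not fixed by the claim): derive both the new image data of the mask region and its coefficient map from per-pixel object identification numbers, then feed them to the blurred image acquisition unit 12.

```python
def mask_region(object_ids, target_id, mask_value=1.0):
    """Build the mask region for one object: new image data that is
    mask_value inside the object's pixels and 0 elsewhere, plus a
    coefficient map with w(x,y) = 1 inside the mask (0 outside is an
    assumption; the claim only fixes the value inside the mask)."""
    new_image, coef = [], []
    for id_row in object_ids:
        new_image.append([mask_value if i == target_id else 0.0 for i in id_row])
        coef.append([1.0 if i == target_id else 0.0 for i in id_row])
    return new_image, coef
```

Blurring the resulting mask with the weighted table of Equation Ia then yields a softened silhouette of the object.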
- A program, where:
the coordinates of a pixel whose blurred image data is to be obtained are (x, y);
the input image data at the coordinates (X, Y) is i(X, Y);
the blurred image data at the coordinates (x, y) is IBlur(x, y);
the points whose x coordinates and y coordinates differ from the coordinates (x, y) by predetermined values are (Xr, Yt), (Xr, Yb), (Xl, Yt), and (Xl, Yb);
l (the letter el) > r, and b > t;
for a point (X, Y), sat(X, Y) is the value satisfying Equation I below;
Sat(x, y) is the value satisfying Equation II below;
w = Xl - Xr and h = Yb - Yt;
and IBlur(x, y) is the value expressed by Equation III below;
the program causing a computer to function as a system for obtaining blurred image data for computer graphics, the system comprising:
input image data input means for receiving the input image data i(X, Y);
sat calculation means for obtaining the sat value given by Equation I using the input image data i(X, Y) received by the input image data input means;
caches for storing the row-by-row sat calculation results obtained by the sat calculation means;
Sat calculation means for reading the values of sat(Xr, Yt), sat(Xr, Yb), sat(Xl, Yt), and sat(Xl, Yb) from the caches and obtaining the value of Sat(x, y);
and blurred image data acquisition means for obtaining IBlur(x, y) using the value of Sat(x, y) obtained by the Sat calculation means, said w, and said h;
wherein the caches include a first cache and a second cache;
the first cache includes a front cache and a back cache;
the second cache includes a front cache and a back cache;
the sat calculation means obtains sat(X, 0), the sat value of the first row, using the input image data i(X, 0) of the first row;
the front cache of the first cache stores sat(X, 0), the sat calculation result of the first row;
the sat calculation means obtains sat(X, 1), the sat value of the second row, using the input image data i(X, 1) of the second row and sat(X, 0), the sat calculation result of the first row stored in the front cache of the first cache;
the back cache of the first cache stores sat(X, 1), the sat calculation result of the second row;
the sat calculation means obtains sat(X, 2), the sat value of the third row, using the input image data i(X, 2) of the third row and sat(X, 1), the sat calculation result of the second row stored in the back cache of the first cache;
the front cache of the first cache updates its stored contents with sat(X, 2), the sat calculation result of the third row;
the same operations are repeated thereafter;
either the front cache of the first cache or the back cache of the first cache stores sat(X, Yt-1), the sat calculation result of the row Y = Yt-1;
whichever of the front cache of the first cache and the back cache of the first cache does not store said sat(X, Yt-1) stores sat(X, Yt), the sat calculation result of the row Y = Yt;
the sat calculation means obtains sat(X, Yt+1), the sat calculation result of the row Y = Yt+1, using the input image data i(X, Yt+1) of that row and said sat(X, Yt) read from the cache;
the front cache of the second cache stores sat(X, Yt+1), the sat calculation result of the row Y = Yt+1;
the sat calculation means obtains sat(X, Yt+2), the sat calculation result of the row Y = Yt+2, using the input image data i(X, Yt+2) of that row and said sat(X, Yt+1) read from the front cache of the second cache;
the back cache of the second cache stores sat(X, Yt+2), the sat calculation result of the row Y = Yt+2;
the sat calculation means obtains sat(X, Yt+3), the sat calculation result of the row Y = Yt+3, using the input image data i(X, Yt+3) of that row and said sat(X, Yt+2) read from the back cache of the second cache;
the front cache of the second cache updates its stored contents with sat(X, Yt+3), the sat calculation result of the row Y = Yt+3;
the same operations are repeated thereafter;
either the front cache of the second cache or the back cache of the second cache stores sat(X, Yb-1), the sat calculation result of the row Y = Yb-1;
and whichever of the front cache of the second cache and the back cache of the second cache does not store said sat(X, Yb-1) stores at least the portion from X = 0 to X = Xl of sat(X, Yb), the sat calculation result of the row Y = Yb.
sat(X, Y) = i(X, Y) + sat(X-1, Y) + sat(X, Y-1) - sat(X-1, Y-1) .... Equation I
where, in Equation I, sat(X-1, Y) and sat(X-1, Y-1) are taken as 0 when X-1 = -1, and sat(X, Y-1) and sat(X-1, Y-1) are taken as 0 when Y-1 = -1.
Sat(x, y) = sat(Xr, Yt) - sat(Xr, Yb) - sat(Xl, Yt) + sat(Xl, Yb) .... Equation II
IBlur(x, y) = Sat(x, y) / (w × h) .... Equation III
- A computer-readable information recording medium storing the program according to claim 9.
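For orientation, the pieces sketched above can be combined end to end; this reuses the hypothetical helpers stream_sat_rows and blur_value from the first sketch and again omits boundary clamping:

```python
def blur_image_point(image, x, y, half_w, half_h):
    """Blur one pixel: stream sat rows until rows Y = Yt and Y = Yb are
    available, then apply Equations II and III. In the claimed hardware
    the row Yt would stay parked in the first cache while the second
    cache ping-pongs; here we simply keep a reference to it."""
    xr, xl = x - half_w, x + half_w  # Xl > Xr per the claim's convention
    yt, yb = y - half_h, y + half_h  # Yb > Yt per the claim's convention
    sat_top = sat_bottom = None
    for row_y, sat in stream_sat_rows(image):
        if row_y == yt:
            sat_top = sat       # sat_row returns a fresh list each row
        elif row_y == yb:
            sat_bottom = sat
            break               # no later rows are needed for this pixel
    return blur_value(sat_top, sat_bottom, xl, xr, yt, yb)
```

A call such as blur_image_point(img, 8, 8, 2, 2) averages a 4 × 4 neighbourhood of pixel (8, 8), matching Equation III with w = h = 4.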
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/637,020 US8970614B2 (en) | 2010-03-26 | 2011-03-25 | Apparatus and a method for obtaining a blur image |
JP2012506859A JP5689871B2 (ja) | 2010-03-26 | 2011-03-25 | Blur image acquisition apparatus and method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010071461 | 2010-03-26 | ||
JP2010-071461 | 2010-03-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011118224A1 (ja) | 2011-09-29 |
Family
ID=44672802
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/001755 WO2011118224A1 (ja) | 2011-03-25 | Blur image acquisition apparatus and method |
Country Status (3)
Country | Link |
---|---|
US (1) | US8970614B2 (ja) |
JP (1) | JP5689871B2 (ja) |
WO (1) | WO2011118224A1 (ja) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004185611A * | 2002-11-21 | 2004-07-02 | Advanced Telecommunication Research Institute International | Face position extraction method, program for causing a computer to execute the face position extraction method, and face position extraction device |
JP2005293061A * | 2004-03-31 | 2005-10-20 | Advanced Telecommunication Research Institute International | User interface device and user interface program |
JP2006072829A * | 2004-09-03 | 2006-03-16 | Fujifilm Software Co Ltd | Image recognition system and image recognition method |
JP2007028348A * | 2005-07-20 | 2007-02-01 | Noritsu Koki Co Ltd | Image processing device and image processing method |
JP2010009599A * | 2008-06-27 | 2010-01-14 | Palo Alto Research Center Inc | System and method for detecting stable keypoints in a picture image using localized scale-space properties |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6864994B1 (en) * | 2000-01-19 | 2005-03-08 | Xerox Corporation | High-speed, high-quality descreening system and method |
Also Published As
Publication number | Publication date |
---|---|
JP5689871B2 (ja) | 2015-03-25 |
US8970614B2 (en) | 2015-03-03 |
US20130042069A1 (en) | 2013-02-14 |
JPWO2011118224A1 (ja) | 2013-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR100896155B1 | Flexible anti-aliasing for embedded devices | |
US8379972B1 | Color decontamination for image compositing | |
US8040352B2 | Adaptive image interpolation for volume rendering | |
CN110580696A | Detail-preserving fast multi-exposure image fusion method | |
CN114862722B | Image brightness enhancement method and processing terminal | |
WO2016039301A1 | Image processing device and image processing method | |
US20080074435A1 | Texture filtering apparatus, texture mapping apparatus, and method and program therefor | |
US20240127402A1 | Artificial intelligence techniques for extrapolating HDR panoramas from LDR low FOV images | |
JP5689871B2 | Blur image acquisition apparatus and method | |
JP7131080B2 | Volume rendering device | |
CN116228517A | Depth image processing method, system, device, and storage medium | |
US7859531B2 | Method and apparatus for three-dimensional graphics, and computer product | |
JP5178933B1 | Image processing device | |
CN113240588A | Image dehazing and exposure method based on an enhanced atmospheric scattering model | |
JP2973432B2 | Image processing method and apparatus | |
JPH0822556A | Texture mapping device | |
JP3587105B2 | Graphic data processing device | |
US6738064B2 | Image processing device and method, and program therefor | |
CN117132470A | Super-resolution image reconstruction method, device, and storage medium | |
CN117911296A | Artificial intelligence techniques for extrapolating HDR panoramas from LDR low-FOV images | |
JP4696669B2 | Image adjustment method and image adjustment device | |
GB2624103A | Artificial intelligence techniques for extrapolating HDR panoramas from LDR low FOV images | |
CN118505889A | Method, apparatus, electronic device, and storage medium for implementing a character self-shadowing effect | |
CN117896510A | Image processing method and apparatus, electronic device, and readable storage medium | |
JP3438921B2 | Moving image generation device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11759033; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2012506859; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 13637020; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 11759033; Country of ref document: EP; Kind code of ref document: A1 |