WO2018105228A1 - Image processing device, display device, image processing device control method, and control program - Google Patents
Image processing device, display device, image processing device control method, and control program
- Publication number
- WO2018105228A1 (PCT/JP2017/036609)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- area
- image
- sky
- region
- divided
- Prior art date
Classifications
- H04N1/4092 — Edge or detail enhancement
- H04N1/628 — Memory colours, e.g. skin or sky
- G09G5/36 — Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/02 — Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
- G09G2320/0686 — Adjustment of display parameters with two or more screen areas displaying information with different brightness or colours
- G09G2340/00 — Aspects of display data processing
- G09G2354/00 — Aspects of interface with display user
- G09G2360/16 — Calculation or use of calculated indices related to luminance levels in display data
Definitions
- the following disclosure relates to an image processing apparatus that processes an image.
- Patent Document 1 discloses a display device (image display device) for displaying an object on a screen so as to obtain a stereoscopic effect close to the object's actual appearance in the natural world.
- The display device of Patent Document 1 is configured for the purpose of reproducing the appearance of such an object. More specifically, the display device of Patent Document 1 includes image processing means for performing diffusion processing on the image information representing a target object according to the distance from the gazing point to the position of the target object, and image reproducing means for generating pixel information representing each pixel to be displayed on the display screen based on the image information that has undergone the diffusion processing.
- Patent Document 2 discloses a technique for changing what a display presents based on the user's line of sight (how the user looks at the computer display).
- Patent Document 1: Japanese Patent Laid-Open No. 9-55959 (published February 25, 1997). Patent Document 2: Published Japanese Translation of PCT Application No. 2014-532206 (published December 4, 2014).
- An object of one aspect of the present disclosure is to realize an image processing apparatus that generates an image close to an actual appearance in the natural world.
- In order to solve the above problem, an image processing apparatus according to one aspect of the present disclosure includes: a sky region specifying unit that specifies a sky region corresponding to the sky in a display target image; an execution determination unit that determines, based on the color of the sky region specified by the sky region specifying unit, whether or not to perform a blurring process on the sky region; and an image generation unit that, when the execution determination unit determines that the blurring process is to be performed, generates a post-processing image by performing the blurring process on the sky region.
- In order to solve the above problem, a control method for an image processing apparatus according to one aspect of the present disclosure includes: a sky region specifying step of specifying a sky region corresponding to the sky in a display target image; an execution determination step of determining, based on the color of the sky region specified in the sky region specifying step, whether or not to perform a blurring process on the sky region; and an image generation step of generating a post-processing image by performing the blurring process on the sky region when it is determined in the execution determination step that the blurring process is to be performed.
- FIG. 1 is a diagram illustrating an example of the configuration of a display device according to Embodiment 1.
- FIG. 2 is a diagram illustrating an example of an image to be displayed on the display unit of the display device according to Embodiment 1 and a plurality of divided regions formed in the image.
- FIGS. 3(a) to 3(d) are diagrams for explaining the concept of a method of specifying a sky region.
- FIG. 4 is a diagram illustrating an example of a post-processing image displayed on the display unit.
- FIG. 5 is a flowchart illustrating an example of processing in the display device according to Embodiment 1.
- FIG. 6 is a diagram illustrating an example of the configuration of a display device according to a modification of Embodiment 1, and FIG. 7 is a flowchart illustrating an example of processing in that display device.
- FIG. 8 is a diagram illustrating an example of an image to be displayed on the display unit of the display device according to Embodiment 2 and a plurality of divided regions formed in the image.
- FIG. 9 is a diagram illustrating an example of the configuration of a display device according to Embodiment 3, FIG. 10 is a diagram illustrating an example of an image to be displayed on the display unit of that display device and a plurality of divided regions formed in the image, and FIGS. 11 to 14 are diagrams illustrating examples of the post-processing image displayed on the display unit.
- Further figures illustrate an example of the configuration of a display device according to Embodiment 4, an example of an image to be displayed on its display unit and the divided regions formed in the image, an example of the post-processing image displayed on the display unit, and a flowchart of an example of processing in that display device.
- The remaining figures are diagrams for explaining examples of processing of a display device according to Embodiment 5.
- Embodiment 1 Hereinafter, Embodiment 1 of the present disclosure will be described in detail based on FIGS. 1 to 5.
- A physical color can be defined by tristimulus values (X, Y, Z).
- The tristimulus values are obtained by integrating, at each wavelength from about 400 nm to 800 nm, the product (luminance of light source) × (reflectance of object) × (human visual sensitivity). Since a color represented by these tristimulus values is an absolute color, colors having the same stimulus values should look the same. However, depending on the situation or environment, a human may perceive psychologically different colors even when the stimulus values are identical.
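For concreteness, this integral can be written in the standard CIE form (a sketch for illustration; the patent states the formula only in words, and the normalizing constant k, light-source spectrum S, object reflectance R, and color-matching functions x̄, ȳ, z̄ follow the usual CIE conventions rather than symbols defined in the patent):

```latex
X = k \int_{400}^{800} S(\lambda)\, R(\lambda)\, \bar{x}(\lambda)\, d\lambda, \quad
Y = k \int_{400}^{800} S(\lambda)\, R(\lambda)\, \bar{y}(\lambda)\, d\lambda, \quad
Z = k \int_{400}^{800} S(\lambda)\, R(\lambda)\, \bar{z}(\lambda)\, d\lambda
```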
- The German psychologist David Katz classified psychological colors based on the observation that the appearance of a color varies not only with its tristimulus values but also with the localization or texture of the object. Among these classes, a surface color refers to a color perceived in such a way that the localization or texture of the object surface cannot be clearly perceived, and that cannot be sensed through any attribute other than color. A typical example is the color of the blue sky, sometimes referred to as "sky blue".
- In the distant blue sky, edges and textures are not spatially localized. That is, unlike a cloud, a bird, an airplane, or a sunset sky that is clearly distinct from the blue sky, such an edge or texture changes position or disappears each time the distant blue sky is viewed again, and is not something humans are clearly aware of. When such a sky is shown on a display device, however, an edge or texture may be recognized as belonging to an actually existing object. This occurs because the distance from the viewer to the display surface is much shorter than the distance from the viewer to the distant blue sky, so the position of the object is psychologically perceived as being close to the viewer. As a result, in this case, the distant blue sky may appear unnatural.
- the image processing apparatus performs a blurring process on the sky region based on the color of the sky region (eg, distant view and background). Specifically, when the color of the sky region is sky blue, the image processing apparatus performs edge processing or a texture in a region that can be recognized as a face color psychologically by performing a blurring process on the sky region. Eliminate. Thereby, a surface color can be perceived as a surface color, and a natural way of viewing the sky region can be provided.
- the color of the sky region eg, distant view and background.
- FIG. 1 is a diagram illustrating an example of the configuration of the display device 100.
- FIG. 2 is a diagram illustrating an example of an image IMG1 (display target image) to be displayed on the display unit 50 (display surface) and a plurality of divided regions A1 to A3 formed in the image IMG1.
- FIGS. 3A to 3D are diagrams for explaining the concept of a technique for identifying the sky region Bs using a graph expressing the relationship between the brightness and the number of pixels in each of the divided regions A1 to A3.
- FIG. 4 is a diagram illustrating an example of the post-processing image IMG2 displayed on the display unit 50.
- FIG. 2 shows, for simplicity, an example in which the image IMG1 (input image) before processing by the control unit 10 is displayed on the display unit 50; in practice, as described later, an image processed by the control unit 10 (for example, the post-processing image IMG2 shown in FIG. 4) is displayed on the display unit 50.
- The same applies to FIGS. 10, 17, and 20 to 22 in the second and subsequent embodiments. The images IMG1 shown in these figures are the same.
- the display device 100 displays an image, and includes a control unit 10 (image processing device), a display unit 50, and a storage unit 70.
- Details of the control unit 10 will be described later.
- the display unit 50 displays an image under the control of the control unit 10, and is composed of, for example, a liquid crystal panel. In the present embodiment, the display unit 50 displays the processed image IMG2 (see FIG. 4).
- the storage unit 70 stores, for example, various control programs executed by the control unit 10, and is configured by a non-volatile storage device such as a hard disk or a flash memory.
- the storage unit 70 stores, for example, image data indicating the image IMG1.
- The control unit 10 reads out, from the storage unit 70, image data representing the image IMG1 stored as a still image. Alternatively, moving image data may be stored in the storage unit 70 and read out by the control unit 10.
- the control unit 10 may perform image processing described below for each frame constituting the moving image.
- The image data or moving image data need not be stored in the storage unit 70 in advance; it may be acquired by receiving a broadcast wave, or may be acquired from an external apparatus (for example, a recording device) that is communicably connected to the display device 100 and stores or generates image data or moving image data.
- Examples of the display device 100 include a personal computer (PC), a multi-function mobile phone (smartphone), a tablet, and a television receiver.
- the control unit 10 controls the display device 100 in an integrated manner. Particularly in the present embodiment, the control unit 10 has an image processing function for performing a predetermined process on the image IMG1 shown in FIG. 2.
- The relationship specifying unit 11 specifies, for each divided region, the relationship between the brightness of the pixels constituting the image IMG1 and the number of pixels having that brightness.
- Specifically, the relationship specifying unit 11 reads the image data stored in the storage unit 70 and identifies the vertical direction (Y-axis direction in FIG. 2) of the image IMG1 represented by the image data. Then, as shown in FIG. 2, it divides the image IMG1 into a plurality of divided regions A1 to A3 along the vertical direction. That is, a plurality of divided regions A1 to A3 are formed (set) in the image IMG1.
- The width h (vertical length) of each of the divided regions A1 to A3 is set so as to divide the width (length) H of the image IMG1 in the Y-axis direction into approximately three equal parts; that is, h ≈ H/3. The width h can be adjusted as appropriate so that the actual image processing is performed effectively.
- In many images, the lower part (−Y-axis direction) of the display unit 50 corresponds to the foreground (front side of the landscape), and the upper part (+Y-axis direction) corresponds to the distant view (back side of the landscape). Therefore, by dividing the image IMG1 along its vertical direction to form the divided regions A1 to A3, the image generation unit 14 can generate a post-processing image IMG2 that takes the perspective of the image (the three-dimensional depth of the landscape) into account and is thus close to the actual appearance in the natural world.
- the width and number of the plurality of divided areas can be arbitrarily set.
- the width of at least a part of the plurality of divided regions may be different from the width of other divided regions. Further, the number of the plurality of divided regions may be two, or may be four or more.
- Based on the gradation values of each of the plurality of pixels constituting the image IMG1 (for example, three gradation values: red, green, and blue), the relationship specifying unit 11 calculates the brightness of each pixel and counts the number of pixels having each brightness. Then, for each of the divided regions A1 to A3, the relationship specifying unit 11 specifies the relationship between the calculated brightness and the counted number of pixels. The relationship specifying unit 11 may express this relationship as, for example, a graph (brightness distribution) in which the horizontal axis represents the brightness and the vertical axis represents the appearance frequency (number of pixels).
- the graph expressing the above relationship is, for example, a graph as shown in (b) to (d) of FIG.
- The relationship specifying unit 11 associates the specified relationship with each of the divided regions A1 to A3 and transmits it to the sky region specifying unit 12 as relationship data.
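As a rough illustration of the relationship specifying unit 11, the following Python sketch divides an image into three vertical regions and builds a brightness histogram for each. The Rec. 601 luma weights and the bin count are assumptions for illustration; the patent does not specify how brightness is computed.

```python
import numpy as np

def brightness_histograms(img, n_regions=3, bins=64):
    """Divide the image vertically and build a brightness histogram per region.

    img: H x W x 3 RGB array with gradation values in 0..255.
    Returns a list of (counts, bin_edges) pairs, top region (A1) first.
    """
    h = img.shape[0] // n_regions  # width h of each region, roughly H/3
    # Normalized brightness per pixel (Rec. 601 luma -- an assumed formula).
    brightness = (0.299 * img[..., 0] + 0.587 * img[..., 1]
                  + 0.114 * img[..., 2]) / 255.0
    hists = []
    for i in range(n_regions):
        region = brightness[i * h:(i + 1) * h]
        # "Relationship" between brightness and the number of pixels having it.
        hists.append(np.histogram(region, bins=bins, range=(0.0, 1.0)))
    return hists
```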
- the relationship specifying unit 11 can also function as part of the image processing function. In this case, the process can be simplified.
- The sky region specifying unit 12 specifies a sky region Bs corresponding to the sky in the image IMG1.
- Specifically, when the number of pixels indicated by the peak of the graph expressing the above relationship for each of the divided regions A1 to A3 specified by the relationship specifying unit 11 decreases from the upper divided region toward the lower divided region in the image IMG1, the sky region specifying unit 12 specifies the area formed by the pixels corresponding to the peak as a distant view area. In that case, the sky region specifying unit 12 determines that a distant view area is included in the image IMG1 and specifies the distant view area as the sky region Bs.
- the graphs shown in (a) to (d) of FIG. 3 are graphs showing the above-described relationships in a given display target image with the horizontal axis representing the brightness and the vertical axis representing the appearance frequency.
- the value on the vertical axis is given for convenience, and the value has no substantial meaning.
- the value on the horizontal axis is a value obtained by normalizing the brightness.
- In general, a display target image includes an area corresponding to the sky (the sky region Bs in FIG. 2) and a group of objects other than that area. For ease of explanation, the area corresponding to the sky is referred to as the sky region Bs in the description of FIG. 3. As shown in (a) of FIG. 3, the graph expressing the above relationship for the sky region Bs tends to be narrow.
- the graph showing the above relationship in the sky region Bs is a narrow graph having a peak when the lightness value is around 0.8.
- the graph showing the relationship in the object group tends to be broad.
- the graph showing the above relationship in the object group is a broad graph having a peak when the value of brightness is around 0.5.
- the graph in the object group may have a plurality of peaks.
- By combining these, a graph (a histogram of the display target image, i.e., its brightness distribution) as shown in (b) to (d) of FIG. 3 is obtained.
- The graphs shown in (b) to (d) of FIG. 3 were obtained by preparing three display target images with different proportions of the sky region Bs, generating for each of them the two graphs shown in (a) of FIG. 3, and synthesizing those two graphs. The proportion of the sky region Bs in the display target image decreases in the order (b), (c), (d) of FIG. 3. It can be seen that the number of pixels indicated by the peak of the graph (and of the pixels with brightness near the peak) decreases as the proportion of the sky region Bs decreases.
- In general, the proportion of the distant view area in a display target image is larger on the upper side of the image than on the lower side. Accordingly, by checking whether the number of pixels indicated by the peak (possibly including the pixels with brightness near the peak) transitions from the upper side toward the lower side of the image as shown in (b) to (d) of FIG. 3, it can be determined whether the display target image contains a distant view area, that is, whether the image IMG1 includes a distant view area serving as the sky region Bs.
- Specifically, the sky region specifying unit 12 identifies, for each of the divided regions A1 to A3, the number of pixels indicated by the peak of the graph specified by the relationship specifying unit 11 (possibly including the pixels with brightness near the peak). When that number decreases in the order of the divided regions A1, A2, A3, the sky region specifying unit 12 specifies the area formed by the pixels at the peak and those with brightness near the peak as the distant view area, and determines that the image IMG1 includes the distant view area as the sky region Bs. For example, the sky region specifying unit 12 identifies pixels whose brightness lies within a predetermined range of the brightness corresponding to the peak (the maximum of the pixel count), for example within ±0.1 in normalized brightness, as the pixels constituting the distant view area.
- When the graph has a plurality of peaks, the sky region specifying unit 12 determines that the image IMG1 includes the sky region Bs if at least one of the peaks decreases as described above.
- The sky region specifying unit 12 transmits pixel data indicating the pixels constituting the sky region Bs to the execution determination unit 13.
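Continuing the sketch above, a hypothetical implementation of the sky region specifying unit 12 might check that the peak pixel count decreases from the top region to the bottom one and then select pixels within ±0.1 of the peak brightness, the example range given above. The helper below builds on brightness_histograms() and is illustrative only.

```python
import numpy as np

def distant_view_mask(img, hists, tol=0.1):
    """Return a boolean mask of candidate distant-view (sky) pixels, or None.

    hists: per-region (counts, bin_edges) from brightness_histograms().
    """
    peaks = [counts.max() for counts, _ in hists]
    # The peak pixel count must decrease from the upper region to the lower one.
    if not all(peaks[i] > peaks[i + 1] for i in range(len(peaks) - 1)):
        return None  # no distant view area detected
    counts, edges = hists[0]  # brightness of the peak in the top region
    k = counts.argmax()
    peak_brightness = (edges[k] + edges[k + 1]) / 2.0
    brightness = (0.299 * img[..., 0] + 0.587 * img[..., 1]
                  + 0.114 * img[..., 2]) / 255.0
    return np.abs(brightness - peak_brightness) <= tol
```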
- the image IMG1 itself does not have information such as what the object included in the image IMG1 is and how large the object is. Therefore, in a general display device, the above information is acquired based on a database search or analysis of metadata accompanying the image IMG1.
- The display device 100 only needs to be able to specify the sky region Bs as the target of the blurring process.
- the display device 100 can specify the distant view area on the assumption that the distant view area is the sky area Bs. Therefore, in the display device 100, the sky region specifying unit 12 can specify the distant view region by a simple method that only uses the relationship specified by the relationship specifying unit 11 (a graph expressing the relationship). That is, the perspective determination in the image IMG1 can be performed by a simple method. The same can be said for the processing of Modification 1 of the present embodiment described later.
- The execution determination unit 13 determines whether or not to perform the blurring process on the sky region Bs based on the color of the sky region Bs specified by the sky region specifying unit 12.
- Specifically, when the colors of the pixels specified by the sky region specifying unit 12 include a specific color, and the number of pixels having the specific color in the distant view area is equal to or greater than a predetermined number, the execution determination unit 13 determines that the blurring process is to be performed on the sky region Bs.
- Here, the specific color is a color for which the color of the sky region Bs can be determined to be a surface color, such as the color of a blue sky (referred to as sky blue in the present embodiment). The predetermined number is a number of pixels at which the sky region Bs can be judged to have the spread of the sky. In other words, when the pixels of a sky region Bs large enough to be regarded as the sky are sky blue, the execution determination unit 13 determines that the sky region Bs is a surface color and decides to perform the blurring process on it.
- The execution determination unit 13 determines whether or not a color is sky blue using, for example, the 24-hue circle of the PCCS (Practical Color Co-ordinate System).
- Specifically, when the color of a pixel corresponds to hue 15:BG, 16:gB, or 17:B in the PCCS 24-hue circle, the execution determination unit 13 determines that the color of the pixel is sky blue; that is, these hues are used as the specific color.
- However, the specific color is not limited to the above; any color for which the color of the sky region Bs can be determined to be a surface color may be selected as the specific color. Nor does the specific color have to be set using the PCCS 24-hue circle; for example, it may be selected from a hue circle divided into 6, 12, or 20 hues.
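An exact PCCS classification requires a conversion table, so the sketch below approximates the "sky blue" test (hues 15:BG to 17:B) with an HSV hue band; the hue range and the saturation/value thresholds are assumed values, not taken from the patent.

```python
import colorsys

def is_sky_blue(r, g, b):
    """Rough sky-blue test for one RGB pixel with components in 0..255.

    Approximates PCCS hues 15:BG, 16:gB, 17:B by an HSV hue band of
    roughly 180-250 degrees (an assumption), with minimum saturation
    and value so that gray or very dark pixels are rejected.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return 180.0 <= h * 360.0 <= 250.0 and s >= 0.15 and v >= 0.4
```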
- Note that the execution determination unit 13 may make the targets of the blurring process the sky region Bs1 included in the divided region A1 (upper divided region) and the sky region Bs2 included in the divided region A2 (lower divided region) adjacent to the divided region A1 only when the two are continuous. That is, in this case, even if it has once been determined that the blurring process is to be performed on the sky region Bs2, the execution determination unit 13 determines not to perform the blurring process on the sky region Bs2 when they are not continuous.
- the sky region Bs is composed of sky regions Bs1 and Bs2.
- Specifically, the execution determination unit 13 determines whether the sky region Bs is continuous at the boundary between the divided regions A1 and A2 in the image IMG1. To do so, it identifies the pixels near the boundary in the sky region Bs1 included in the divided region A1 and the pixels near the boundary in the sky region Bs2 included in the divided region A2. The execution determination unit 13 then identifies the area where these pixels are adjacent across the boundary as a contact area, and determines whether the number of adjacent pixels in the contact area is equal to or greater than a predetermined number, or accounts for a predetermined ratio or more of the total number of pixels near the boundary. If so, the execution determination unit 13 determines that the sky region Bs2 is continuous with the sky region Bs1 and also makes the sky region Bs2 a target of the blurring process; if the number is less than the predetermined number or the predetermined ratio, the sky region Bs2 is excluded from the blurring process.
- The area framed by the dotted line in the figure is the area specified as the contact area.
- A region identified as the sky region Bs may actually be an object different from the sky. In that case, since the actual state of the object can be clearly seen, it is not appropriate to perform the blurring process on it. As described above, the upper part of the image IMG1 is often the distant view area. Therefore, by determining whether the sky region Bs is continuous from the upper divided region toward the lower divided region, it can be determined whether the sky region Bs2 included in the lower divided region also represents the sky, like the sky region Bs1. In other words, when it is determined that they are not continuous, it can be concluded that the sky region Bs2 actually represents an object other than the sky, and the blurring process can be withheld for that object. As a result, the inappropriate blurring process described above can be prevented.
- The predetermined amount may be set to any value at which the continuity can be determined.
- the predetermined amount can be set to 20%, for example.
- In this case, when the number of pixel pairs in which the upper and lower pixels across the boundary are both sky blue is 20% or more of the number of horizontal display pixels (the total number of pixels aligned in the horizontal direction (X-axis direction) of the image IMG1), it is determined that the sky regions Bs1 and Bs2 are continuous.
- The above description takes the divided region A1 as the upper divided region and the divided region A2 as the lower divided region, but the same applies when the divided region A2 is the upper divided region and the divided region A3 is the lower divided region.
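The continuity test can be sketched as follows: count the vertical pixel pairs straddling the boundary whose upper and lower pixels are both sky blue, and compare the count against 20% of the horizontal pixel count, the example threshold above. The mask representation is an assumption of this sketch.

```python
import numpy as np

def regions_continuous(sky_blue_mask, boundary_row, ratio=0.20):
    """Check whether the sky region continues across a divided-region boundary.

    sky_blue_mask: H x W boolean array, True where a pixel was judged sky blue.
    boundary_row: index of the first row of the lower divided region.
    """
    upper = sky_blue_mask[boundary_row - 1]  # last row of the upper region
    lower = sky_blue_mask[boundary_row]      # first row of the lower region
    pairs = np.logical_and(upper, lower).sum()
    return pairs >= ratio * sky_blue_mask.shape[1]
```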
- The execution determination unit 13 transmits the determination result as to whether or not to perform the blurring process on the sky region Bs to the image generation unit 14.
- The image generation unit 14 generates the post-processing image IMG2 by performing the blurring process on the sky region Bs when the execution determination unit 13 determines that the blurring process is to be performed; that is, it performs the process of generating the post-processing image IMG2 illustrated in FIG. 4. The image generation unit 14 then causes the display unit 50 to display the post-processing image IMG2 by transmitting image data representing the generated image to the display unit 50.
- the blurring process is realized by a known method using, for example, a filter (for example, a low-pass filter).
- One example is a process that applies a low-pass filter composed of a 3×3 matrix (averaging matrix).
- a 3 ⁇ 3 matrix may have a central value of 0 and a value of 1/8 in other regions, or a central value of 1/2 and a value of 1/16 in other regions.
- However, the filter is not limited to these; various low-pass filters, including a simple averaging matrix, can be applied to the blurring process. That is, it suffices that the blurring is strong enough that clear edges and textures cannot be visually recognized in the sky region Bs when the display device 100 displays the post-processing image IMG2.
- the matrix size of the low-pass filter may be selected appropriately in consideration of the application of the display device 100, the circuit scale of the display device 100, or the resolution of the display unit 50.
- For example, the matrix size may be about 5×5 to 15×15. According to a study by the present inventors, it has been confirmed that applying a low-pass filter of about 5×5 for Hi-Vision (HD), about 9×9 for 4K, and about 13×13 for 8K brings the sky region Bs close to its actual (natural) appearance.
- Note that the low-pass filter in the present embodiment is applied not to remove noise in the sky region Bs but to clearly blur the sky region Bs.
- Although blurring can be performed with a 3×3 low-pass filter, it is preferable to apply a low-pass filter larger than 3×3, as described above, to achieve clearer blurring.
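A hedged sketch of the blurring step: an averaging low-pass kernel is convolved with each channel, and the result is written back only inside the sky region. size=9 follows the 9×9 suggestion above for 4K (5×5 for HD, 13×13 for 8K); the use of SciPy is a convenience of this sketch, not something the patent prescribes.

```python
import numpy as np
from scipy.ndimage import convolve

def blur_sky(img, sky_mask, size=9):
    """Blur only the sky region with a size x size averaging low-pass filter.

    img: H x W x 3 array; sky_mask: H x W boolean array.
    """
    kernel = np.full((size, size), 1.0 / size ** 2)  # simple average matrix
    out = img.astype(np.float64).copy()
    for c in range(3):  # filter each color channel, then mask to the sky
        blurred = convolve(out[..., c], kernel, mode="nearest")
        out[..., c] = np.where(sky_mask, blurred, out[..., c])
    return out
```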
- the image generation unit 14 performs blurring processing on the entire sky region Bs.
- In doing so, the edges of objects bordering the sky region Bs may also be blurred.
- An object actually observed against the sky can be seen clearly because it has both edges and texture, but its position (sense of distance) blends into the sky (background) and its distinctness decreases. By blurring the edges of the object as described above, the appearance of the actual object can be approximated in the post-processing image IMG2.
- the present inventors have confirmed that when the edge of the object is emphasized, the object appears to be raised from the display surface, and the natural appearance of the sky is impaired.
- The post-processing image IMG2 generated by the image generation unit 14 from the image IMG1 shown in FIG. 2 is as shown in FIG. 4. Note that the color of the sky region Bs in FIG. 2 is sky blue in most of the region, and the contact area of the sky region Bs between the divided regions A1 and A2 is a predetermined amount or more.
- Accordingly, the blurring process is performed on the sky region Bs. Blurring is also applied to (i) the edge between the sky region Bs and the object Obj1 (tree), and (ii) the edge between the sky region Bs and the object Obj3 (mountain).
- The regions other than (i) and (ii) above (including parts of the objects Obj1 and Obj3, the object Obj2 (house), and the object Obj4 (flower)) remain as in the image IMG1. That is, the image IMG1 is blurred so that edges and texture cannot be visually recognized in the area determined to be a surface color, while edges and texture are retained in the areas determined not to be a surface color. The texture inside the objects Obj1 and Obj3 is also retained.
- FIG. 5 is a flowchart illustrating an example of processing in the display device 100.
- the relationship specifying unit 11 reads image data from the storage unit 70 (S1), and divides the image IMG1 into a plurality of divided regions A1 to A3 in accordance with the vertical direction of the image IMG1.
- the relationship specifying unit 11 specifies the relationship in each of the divided areas A1 to A3 (S2).
- When the number of pixels indicated by the peak of the graph expressing the above relationship for each of the divided regions A1 to A3 specified by the relationship specifying unit 11 decreases from the divided region A1 toward the divided region A3, the area formed by the pixels corresponding to the peak is specified as the distant view area.
- the sky region specifying unit 12 determines that the distant view region is included in the image IMG1, and specifies the distant view region as the sky region Bs (S3; sky region specifying step).
- Next, the sky region specifying unit 12 identifies the pixels corresponding to the peak as pixels constituting the sky region Bs (S4). It is then determined whether those pixels are sky blue and are present in at least the predetermined number (S5; execution determination step).
- If YES in S5, the execution determination unit 13 determines whether the sky regions Bs in the divided regions A1 and A2 are continuous (S6; execution determination step). Specifically, the execution determination unit 13 determines whether the contact area between the divided regions A1 and A2 is present in a predetermined amount or more. On the other hand, if NO in S5, the image generation unit 14 causes the display unit 50 to display the image IMG1 without performing the blurring process on it (S8).
- If YES in S6, the execution determination unit 13 determines to perform the blurring process on the entire sky region Bs.
- the image generation unit 14 generates a processed image IMG2 by performing blurring processing on the entire sky region Bs (S7; image generation step).
- the image generation unit 14 displays the processed image IMG2 on the display unit 50 (S8).
- On the other hand, if NO in S6, the execution determination unit 13 determines to perform the blurring process only on the sky region Bs1, included in the divided region A1, within the sky region Bs.
- In this case, the image generation unit 14 performs the blurring process only on the sky region Bs1 and generates a post-processing image IMG2 (an image different from the post-processing image IMG2 illustrated in FIG. 4) (S9; image generation step).
- the image generation unit 14 displays the processed image IMG2 on the display unit 50 (S8).
- FIG. 6 is a diagram illustrating an example of the configuration of the display device 100a.
- the display device 100a includes a control unit 10a (image processing device) that comprehensively controls the display device 100a.
- The control unit 10a has an image processing function for performing predetermined processing on the image IMG1 shown in FIG. 2, and includes a pixel counting unit 21, a sky region specifying unit 12a, an execution determination unit 13a, and an image generation unit 14. That is, the control unit 10a differs from the control unit 10 mainly in that it includes the pixel counting unit 21 in place of the relationship specifying unit 11. This means that its method of specifying the sky region Bs differs from that of the control unit 10.
- the pixel counting unit 21 classifies each of the plurality of pixels constituting the image IMG1 into one of a plurality of predetermined colors for each of the divided regions A1 to A3.
- the plurality of predetermined colors for example, 24 colors in the 24-color ring of PCCS can be used.
- the plurality of predetermined colors is not limited to 24 colors.
- Specifically, the pixel counting unit 21 reads the image data stored in the storage unit 70 and, like the relationship specifying unit 11, divides the image IMG1 into the divided regions A1 to A3 along the vertical direction as shown in FIG. 2. In addition, the pixel counting unit 21 calculates the color of each pixel based on the gradation values of the plurality of pixels constituting the image IMG1, classifies each calculated color into one of the plurality of predetermined colors for each of the divided regions A1 to A3, and counts the number of classified pixels (appearance frequency) for each predetermined color. The pixel counting unit 21 transmits pixel-count data associating each predetermined color with its pixel count to the sky region specifying unit 12a.
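A sketch of the pixel counting unit 21: each pixel is assigned to one of 24 hue bins per divided region. Equal HSV hue bins stand in for the PCCS 24-hue circle here (an assumption; an exact PCCS mapping would need a lookup table).

```python
import colorsys
import numpy as np

def count_colors_per_region(img, n_regions=3, n_hues=24):
    """Count, per divided region, how many pixels fall into each hue bin.

    img: H x W x 3 uint8 RGB array. Returns an (n_regions, n_hues) array
    where counts[i, c] is the appearance frequency of hue bin c in region Ai.
    """
    h = img.shape[0] // n_regions
    counts = np.zeros((n_regions, n_hues), dtype=np.int64)
    norm = img.astype(np.float64) / 255.0
    for i in range(n_regions):
        for r, g, b in norm[i * h:(i + 1) * h].reshape(-1, 3):
            hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
            counts[i, int(hue * n_hues) % n_hues] += 1
    return counts
```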
- The sky region specifying unit 12a specifies the sky region Bs in the image IMG1, similarly to the sky region specifying unit 12.
- Specifically, when, for any one of the predetermined colors, the number of pixels decreases from the divided region A1 (the divided region located on the upper side) toward the divided region A3 (the divided region located on the lower side), the sky region specifying unit 12a specifies the area formed by the pixels of that color as the distant view area. In this case, the sky region specifying unit 12a determines that the image IMG1 includes the distant view area as the sky region Bs. When it determines that the image IMG1 includes the sky region Bs, the sky region specifying unit 12a transmits color data indicating the predetermined color whose pixel count decreased (the predetermined color corresponding to the pixels constituting the distant view area) to the execution determination unit 13a.
- Similarly to the execution determination unit 13, the execution determination unit 13a determines whether to perform the blurring process on the sky region Bs based on the color of the sky region Bs.
- Specifically, when the predetermined color indicated by the color data is sky blue, the execution determination unit 13a identifies the pixels of that color as constituting the sky region Bs and determines that the blurring process is to be performed on the sky region Bs containing those pixels. Whether the predetermined color is sky blue is determined, as described above, by whether it is, for example, 15:BG, 16:gB, or 17:B in the 24-hue circle. The execution determination unit 13a then transmits to the image generation unit 14 the determination result as to whether or not to perform the blurring process on the sky region Bs.
- In the display device 100a, it can be said that the color of the sky region Bs (the candidate for the determination process) needed for the determination by the execution determination unit 13a has already been evaluated by the sky region specifying unit 12a. Therefore, the execution determination unit 13a can simply determine the sky region Bs, whose sky color was identified by the sky region specifying unit 12a, as the region on which the blurring process can be performed.
- In the display device 100a, the color of each pixel is evaluated in the course of specifying the sky region Bs. Unlike in the display device 100, there is therefore no need for the two-step procedure of first specifying the sky region Bs based on the above relationship and then evaluating the colors of its pixels. Accordingly, the display device 100a can perform the processing more simply than the display device 100.
- FIG. 7 is a flowchart illustrating an example of processing in the display device 100a.
- the pixel counting unit 21 reads image data from the storage unit 70 (S11), and divides the image IMG1 into a plurality of divided regions A1 to A3.
- the pixel counting unit 21 classifies the calculated color of each pixel into one of a plurality of predetermined colors for each of the divided areas A1 to A3, and counts the number of pixels for each predetermined color (S12).
- When, for any of the plurality of predetermined colors, the number of pixels decreases from the divided region A1 toward the divided region A3, the sky region specifying unit 12a specifies the area formed by the pixels of that color as the distant view area. That is, the sky region specifying unit 12a determines that the image IMG1 includes the distant view area and specifies it as the sky region Bs (S13; sky region specifying step).
- Next, the execution determination unit 13a determines whether the predetermined color whose pixel count decreased is sky blue (S14; execution determination step). If YES in S14, the process proceeds to S6. On the other hand, if NO in S14, the image generation unit 14 causes the display unit 50 to display the image IMG1 without performing the blurring process on it (S8).
- In the above description, the control unit 10 or 10a specifies a region as the distant view area when the number of pixels indicated by the peak of the graph expressing the above relationship, or the number of pixels of a predetermined color, decreases across the divided regions A1 to A3; however, the present disclosure is not limited to this. For example, even when the number of pixels tends to decrease over the divided regions A1 to A3 as a whole but is constant between some of the divided regions, the control unit 10 or 10a may identify the region including those pixels as the distant view area.
- For example, when the number of pixels is constant between the divided regions A1 and A2 and decreases between the divided regions A2 and A3, the divided regions A2 and A3 correspond to such a set of divided regions, and the sky region Bs in those divided regions can be made the target of the blurring process. When the divided regions are set finely, as in Embodiment 2, it is quite possible for the pixel count to be constant between adjacent divided regions; even in such a case, the distant view area can be identified accurately.
- the blurring process can be performed on the sky region Bs even when the entire image IMG1 is a blue sky.
- The control unit 10 or 10a may also detect edges in the image IMG1. In this case, the control unit 10 or 10a detects the edges after acquiring the image data from the storage unit 70, whereby the objects Obj1 to Obj4 in the image IMG1 (for example, their contours) are detected.
- Without edge detection, the blurring process might reduce the texture of the objects Obj1 and Obj3, which share a boundary with the sky region Bs, or the appearance of their edges might be altered. By detecting the edges, the sky region Bs can be blurred while only the edges of the objects Obj1 and Obj3, or their immediate vicinity, are reliably blurred; the above possibility can therefore be reduced.
- edge detection is not essential. That is, as described above, it is not always necessary to detect the objects Obj1 to Obj4 by detecting the edge.
- The control unit 10 or 10a may generate the post-processing image IMG2 by combining an image obtained by performing the blurring process on the sky region Bs with an image obtained by performing other image processing on the image IMG1. However, when different processes are applied to the pixels constituting the two images, the blurring process for the sky region Bs is applied preferentially.
- Examples of the other image processing include enhancement processing or blurring processing performed according to a predetermined condition.
- An example of performing enhancement processing or blurring processing according to such a predetermined condition will be described in the third and subsequent embodiments.
- the display device 100 has been described as including the control unit 10 and the display unit 50.
- However, the display device 100 is not limited to this; for example, an external device (image processing apparatus) communicably connected to a display device having the display unit 50 may have the image processing function of the control unit 10. The same applies to the display device 100a.
- the display device 100 or 100a determines whether or not to perform the blurring process based on the color of the sky area Bs, and performs the blurring process on the sky area Bs when it is determined to perform the blurring process.
- The color of the sky region Bs subjected to the blurring process is a color that can be recognized as a surface color (for example, that of a blue sky). For this reason, removing the edges or texture of a sky region Bs recognized as a surface color improves its appearance as a surface color and brings it close to the actual appearance of the sky in the natural world.
- FIG. 8 is a diagram illustrating an example of the image IMG1 to be displayed on the display unit 50 and a plurality of divided regions B1 to B6 formed in the image IMG1.
- In Embodiment 1, the width h of the divided regions A1 to A3 is set so as to divide the width H of the image IMG1 into approximately three equal parts. In contrast, in the present embodiment, the divided regions B1 to B6 do not all have the same width. Specifically, the width ha (vertical length) of the divided regions B1 to B4 (upper divided regions) is shorter than the width hb (vertical length) of the divided regions B5 and B6 (lower divided regions).
- the divided areas B1 to B4 are set as upper half areas in the image IMG1, and the divided areas B5 and B6 are set as lower half areas in the image IMG1.
- Specifically, the width ha of the divided regions B1 to B4 is set to 1/8 of the width H of the image IMG1, and the width hb of the divided regions B5 and B6 is set to 1/4 of the width H.
- In many images, the sky occupies the upper part, so the sky region Bs can be specified more accurately by setting the width ha of the upper divided regions B1 to B4 shorter than the width hb of the lower divided regions B5 and B6.
- The method of setting the divided regions B1 to B6 is not limited to the above; for example, the widths ha and hb of the divided regions B1 to B6 may be set so as to increase stepwise from top to bottom.
- Further, the width ha of at least one of the divided regions B2 to B4 (for example, the divided region B4) constituting the upper divided regions may be the same as the width hb of the divided regions B5 and B6 constituting the lower divided regions. That is, it suffices that the width of at least one of the upper divided regions B1 to B4 is set shorter than the width of at least one of the lower divided regions B5 and B6.
- the display device 200 can perform the processing of the display device 100 or 100a of the first embodiment, and in that case, the effect of the display device 100 or 100a can be achieved. This also applies to the display devices 300 to 500 in the third and later embodiments described below.
- In Embodiment 1, the processing for performing the blurring process on the sky region Bs based on the color of the sky region Bs was described. The display device 300 according to Embodiment 3 generates the post-processing image IMG2 by additionally performing enhancement processing or blurring processing that takes into account the position at which the viewer 90 is looking on the display unit 50.
- FIG. 9 is a diagram illustrating an example of the configuration of the display device 300.
- FIG. 10 is a diagram illustrating an example of the image IMG1 to be displayed on the display unit 50 and a plurality of divided regions A1 to A3 formed in the image IMG1.
- 11 to 14 are diagrams illustrating an example of the processed image IMG2 displayed on the display unit 50.
- The display device 300 displays an image, and includes a control unit 10b (image processing device), a display unit 50, a gazing point detection sensor 60 (sensor), and a storage unit 70.
- the gazing point detection sensor 60 detects the gazing point F of the viewer 90 on the display unit 50, and transmits gazing point data indicating the detected gazing point F to the control unit 10b.
- the gazing point detection sensor 60 is realized by, for example, an eye tracker that detects the movement of the line of sight of the viewer 90 by detecting the movement of the eyeball of the viewer 90. Further, the position of the gazing point F is represented by, for example, xy coordinates arbitrarily set on the display unit 50.
- the control unit 10b controls the display device 300 in an integrated manner.
- the control unit 10b has an image processing function for performing a predetermined process on the image IMG1 illustrated in FIG. 10.
- In addition to the configuration of the control unit 10, the control unit 10b includes an image generation unit 14b, an attention area specifying unit 31, and an object detection unit 32; that is, the control unit 10b differs from the control unit 10 in comprising these units.
- Although this embodiment is described assuming that the control unit 10b includes the configuration of the control unit 10, the control unit 10b may instead include the configuration of the control unit 10a.
- The processing of the attention area specifying unit 31 and the object detection unit 32 is performed when there is only one viewer. In other words, when the display device 300 determines, using a face detection function that detects the face of the viewer 90, that a plurality of viewers 90 are looking at the display unit 50, the processing of the attention area specifying unit 31 and the object detection unit 32 is not performed. The same applies to Embodiments 4 and 5.
- The attention area specifying unit 31 specifies, based on the gazing point F detected by the gazing point detection sensor 60, the attention area TA, which is the divided region that the viewer 90 is paying attention to, among the divided regions A1 to A3. That is, the attention area specifying unit 31 specifies the attention area TA among the divided regions A1 to A3, and thereby also identifies the non-attention areas (the divided regions other than the attention area TA).
- Specifically, the attention area specifying unit 31 reads the image data stored in the storage unit 70, identifies the vertical direction of the image IMG1 represented by the image data, and divides the image IMG1 into the divided regions A1 to A3 along that direction.
- the attention area specifying unit 31 determines which of the position coordinates indicating the divided areas A1 to A3 the gazing point (position coordinates) F indicated by the gazing point data acquired from the gazing point detection sensor 60 matches. That is, the attention area specifying unit 31 specifies which of the divided areas A1 to A3 the gazing point F is included in. Then, the attention area specifying unit 31 specifies the divided area including the gazing point F as the attention area TA. In the example of FIG. 10, the divided area A2 is specified as the attention area TA. Further, the divided areas A1 and A3 are specified as non-target areas.
- The attention area specifying unit 31 transmits area specification data indicating the correspondence between the divided regions A1 to A3 and the attention area TA and non-attention areas to the object detection unit 32.
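For equal-height divided regions, mapping the gazing point to a divided region reduces to integer division on its y coordinate, as in this sketch (the coordinate convention, y = 0 at the top, is an assumption):

```python
def attention_region(gaze_y, img_height, n_regions=3):
    """Return the index of the divided region containing the gazing point.

    0 corresponds to the top region (A1), n_regions - 1 to the bottom one.
    """
    h = img_height // n_regions
    return min(int(gaze_y) // h, n_regions - 1)
```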
- The object detection unit 32 detects the objects Obj1 to Obj4 in the image IMG1. Specifically, the object detection unit 32 reads the image data stored in the storage unit 70 and detects the objects Obj1 to Obj4 by detecting edges (edge regions) in the image IMG1 represented by the image data.
- the object detection unit 32 uses, for example, a filter to detect, as an edge, a region in which the difference in pixel value between adjacent pixels is equal to or greater than a predetermined value (that is, a region where the brightness is rapidly changed).
- A closed region formed by the detected edges (or a region that can be regarded as closed) is detected as an object.
- Edge detection is not limited to differences in pixel value; for example, a region where the difference in hue values between adjacent pixels is equal to or greater than a predetermined value (that is, a region where the color changes rapidly) may be detected as an edge.
- Alternatively, edge detection based on depth information may be performed.
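The edge criterion above (adjacent pixels differing by at least a predetermined value) can be sketched with simple forward differences; the threshold of 30 gradations is an assumed value, and a real implementation might use a dedicated edge filter instead.

```python
import numpy as np

def edge_mask(gray, threshold=30):
    """Mark pixels where an adjacent pixel differs by >= threshold.

    gray: H x W integer array of pixel values (e.g., brightness).
    """
    g = gray.astype(np.int32)
    dy = np.abs(np.diff(g, axis=0))  # difference with the pixel below
    dx = np.abs(np.diff(g, axis=1))  # difference with the pixel to the right
    mask = np.zeros(gray.shape, dtype=bool)
    mask[:-1, :] |= dy >= threshold
    mask[:, :-1] |= dx >= threshold
    return mask
```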
- The object detection unit 32 detects whether each of the detected objects Obj1 to Obj4 is present in the attention area TA or in a non-attention area indicated by the area specification data, and transmits the detection result to the image generation unit 14b.
- the object detection unit having a function of detecting that the objects Obj1 to Obj4 are present in the attention area TA may be referred to as a first object detection unit.
- the object detection unit having a function of detecting that the objects Obj1 to Obj4 are present in the non-attention area may be referred to as a second object detection unit.
- the object detection unit 32 will be described as having both functions of the first and second object detection units. However, each of the first and second object detection units may be provided as an individual functional unit.
- the object Obj1 is included in the divided area A2 which is the attention area TA, and is also included in the divided areas A1 and A3 which are non-attention areas.
- the object Obj2 is included in the divided area A2 that is the attention area TA and is also included in the divided area A3 that is the non-attention area.
- the object Obj3 is included in the divided area A2 that is the attention area TA and is also included in the divided area A1 that is the non-attention area.
- the object Obj4 is included only in the divided area A3, which is a non-attention area.
- the object detection unit 32 transmits, to the image generation unit 14b, a detection result indicating that the objects Obj1 to Obj3 are included in the attention area TA (divided area A2) and that the object Obj4 is included only in the non-attention area (divided area A3).
- the object detection unit 32 transmits coordinate data indicating the position coordinates of the objects Obj1 to Obj4 in the image IMG1 to the image generation unit 14b together with the detection result.
- when a target object is included in both the attention area TA and a non-attention area (the objects Obj1 to Obj3 shown in FIG. 10), a detection result indicating that the object is included in both of them (the divided area A2 and the divided area A1 and/or A3) may be transmitted.
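- for illustration, the set of divided areas an object spans can be derived from its bounding box; the helper below is a hypothetical sketch assuming horizontal bands of equal height and y-coordinates measured from the top:

```python
def overlapped_areas(top_y: int, bottom_y: int, image_height: int, n_areas: int = 3) -> set[int]:
    """Return the indices of the horizontal bands that an object's bounding
    box spans, given its top and bottom y-coordinates (inclusive)."""
    band_height = image_height / n_areas
    first = int(top_y // band_height)
    last = int(min(bottom_y, image_height - 1) // band_height)
    return set(range(first, last + 1))

# Example matching FIG. 10: an object spanning y = 100..800 in a 900-pixel-high
# image overlaps all three divided areas {0, 1, 2}, like the object Obj1.
assert overlapped_areas(100, 800, 900) == {0, 1, 2}
```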
- the image generation unit 14b performs the same process as the image generation unit 14 (hereinafter also referred to as process P1). That is, when the execution determination unit 13 determines that the blurring process is to be performed on the sky region Bs, the image generation unit 14b generates an image (hereinafter also referred to as image IA) by performing the blurring process on the sky region Bs. On the other hand, if it is determined that the blurring process is not to be performed, the image IMG1 itself is used as the image IA. In addition, the image generation unit 14b generates an image (hereinafter also referred to as image IB) by performing an enhancement process on the image IMG1 in at least a part of the attention area TA specified by the attention area specifying unit 31 among the divided areas A1 to A3, and a blurring process in at least a part of the non-attention areas (hereinafter also referred to as process P2).
- This enhancement processing is realized by a known method using, for example, a filter (edge enhancement filter).
- the image generation unit 14b generates the processed image IMG2 by combining the images IA and IB generated by the processes P1 and P2.
- at this time, the image generation unit 14b preferentially applies the result of the process P1 to the post-processing image IMG2.
- that is, the image generation unit 14b performs the enhancement process on at least a part of the non-processing area, i.e., the area of the attention area TA other than the sky area Bs subjected to the blurring process.
- specifically, the image generation unit 14b sequentially determines, for each of the plurality of pixels of the image IMG1, whether or not the color indicated by the pixel is sky blue. As this determination process, for example, the determination process in the execution determination unit 13 described in the first embodiment can be applied. Then, for a pixel determined to be sky blue, the pixel data of the corresponding pixel in the image IA is output, and for a pixel determined not to be sky blue, the pixel data of the corresponding pixel in the image IB is output, thereby generating the post-processing image IMG2.
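- the per-pixel composition can be sketched as follows; the sky-blue test here is an assumed placeholder (the embodiment reuses the determination process of the execution determination unit 13), and the threshold values are illustrative only:

```python
import numpy as np

def composite_by_sky_color(img1: np.ndarray, img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """For each pixel of IMG1 judged sky blue, output the corresponding
    pixel of IA; otherwise output the corresponding pixel of IB."""
    r = img1[..., 0].astype(np.int16)
    g = img1[..., 1].astype(np.int16)
    b = img1[..., 2].astype(np.int16)
    is_sky = (b > 120) & (b > r + 20) & (b >= g)   # hypothetical sky-blue heuristic
    return np.where(is_sky[..., None], img_a, img_b)
```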
- the post-processing image IMG2 may be generated as follows.
- when the image generation unit 14b generates the image IA by performing the blurring process on the sky region Bs, the position of the sky region Bs in the image IMG1 is naturally specified. Therefore, the image generation unit 14b can generate the processed image IMG2 by generating a mask image in which the sky region Bs (or the area other than the sky region Bs) is masked, and performing a logical operation using the mask image, the image IA, and the image IB.
- a reduced image may be used as the mask image. This is possible because the processing applied by the image generation unit 14b to the sky region Bs is a blurring process, so block-shaped artifacts are unlikely to appear in the processed image IMG2. For example, when a 9 × 9 low-pass filter is used for the blurring process, a mask image having 1/8 the size of the image IMG1 can be used. When a reduced image is used as the mask image, interpolation processing using intra-block addresses may be performed between the mask image and the images IA and IB.
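- a minimal sketch of the mask-based composition with a reduced mask, assuming the image dimensions are divisible by the reduction factor and using nearest-neighbour expansion in place of the intra-block interpolation mentioned above:

```python
import numpy as np

def composite_with_reduced_mask(img_a: np.ndarray, img_b: np.ndarray,
                                small_mask: np.ndarray, scale: int = 8) -> np.ndarray:
    """Compose IA and IB using a boolean sky mask at 1/scale of the image
    size: IA is taken where the expanded mask is True, IB elsewhere."""
    mask = np.kron(small_mask.astype(np.uint8),
                   np.ones((scale, scale), dtype=np.uint8)).astype(bool)
    return np.where(mask[..., None], img_a, img_b)
```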
- the image generation unit 14b generates the post-processing image IMG2 shown in FIGS. 11 to 14, for example, by performing the above-described processing.
- the post-processing image IMG2 shown in FIG. 11 reflects the image IA generated as a result of the following process P1. That is, the execution determination unit 13 determines that the sky region Bs is the target of the blurring process, and the image generation unit 14b performs the blurring process on the sky region Bs. With this blurring process, the edge between the sky region Bs and the object Obj1 and the edge between the sky region Bs and the object Obj3 are also blurred.
- the post-processing image IMG2 shown in FIG. 11 also reflects the image IB generated as a result of the following process P2.
- that is, based on the detection result acquired from the object detection unit 32, the image generation unit 14b performs the enhancement process on any object having at least a part in the attention area TA, and performs the blurring process on any object existing only in the non-attention areas (processing PA).
- the image generation unit 14b specifies the positions of the objects Obj1 to Obj3 in the image IMG1 using the coordinate data, and performs an enhancement process on the objects Obj1 to Obj3. That is, the edges of the objects Obj1 to Obj3 are emphasized. In addition, even when the entire target object is included in the attention area TA, the image generation unit 14b performs enhancement processing on the target object.
- the image generation unit 14b specifies the position of the object Obj4 in the image IMG1 using the coordinate data, and performs a blurring process on the object Obj4. That is, the edge of the object Obj4 is blurred.
- in FIG. 10, the object Obj1 exists across all the divided areas A1 to A3. Therefore, the enhancement process is performed on the object Obj1 regardless of which of the divided areas A1 to A3 is the attention area TA. Since the object Obj2 exists across the divided areas A2 and A3, the object Obj2 is subjected to the enhancement process when the attention area TA is either the divided area A2 or A3, and to the blurring process when the attention area TA is the divided area A1. Since the object Obj3 exists across the divided areas A1 and A2, the object Obj3 is subjected to the enhancement process when the attention area TA is either the divided area A1 or A2, and to the blurring process when the attention area TA is the divided area A3.
- since the object Obj4 exists only in the divided area A3, the object Obj4 is subjected to the enhancement process when the attention area TA is the divided area A3, and to the blurring process when the attention area TA is the divided area A1 or A2.
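- the decision rule of the processing PA can be summarized by the following sketch; the callables and the dictionary layout are hypothetical:

```python
def process_pa(objects, attention_idx, enhance, blur):
    """Processing PA: enhance any object that has at least a part in the
    attention area; blur any object that lies only in non-attention areas.
    `objects` maps an object name to the set of band indices it spans."""
    for name, areas in objects.items():
        if attention_idx in areas:
            enhance(name)   # e.g. an edge-enhancement filter on the object
        else:
            blur(name)      # e.g. a low-pass filter on the object

# Example matching FIG. 10 (attention area TA = A2, index 1):
objs = {"Obj1": {0, 1, 2}, "Obj2": {1, 2}, "Obj3": {0, 1}, "Obj4": {2}}
process_pa(objs, 1, lambda o: print("enhance", o), lambda o: print("blur", o))
# -> Obj1, Obj2 and Obj3 are enhanced; Obj4 is blurred
```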
- the image generation unit 14b combines the images IA and IB generated by the above processing so that the sky region Bs subjected to the blurring process in the image IA is preferentially applied, thereby generating the post-processing image IMG2 shown in FIG. 11.
- in the post-processing image IMG2 shown in FIG. 11, all the edges of the objects Obj1 to Obj3 are emphasized in the process P2 (PA). However, the edges (i) and (ii), which were subjected to the blurring process in the process P1, remain blurred.
- in other words, for the objects Obj1 to Obj3, the image generation unit 14b performs the enhancement process only on the portions included in the non-processing area.
- in the example of FIG. 11, most of the divided area A1 is the sky area Bs. Therefore, even if the gazing point F is in the divided area A1, the sky area Bs subjected to the blurring process in the image IA is preferentially applied, so the edge of the object Obj3 included in the divided area A1 is blurred.
- the post-processing image IMG2 illustrated in FIG. 12 reflects the image IA generated as a result of the same process P1 as in FIG. 11.
- the post-processing image IMG2 shown in FIG. 12 also reflects an image IB generated as a result of the following process P2.
- in the process P2, the image generation unit 14b determines whether or not to perform the enhancement process on an object having at least a part in the attention area TA, based on the positions of the lowermost ends Obj1b to Obj4b (lower ends) of the objects Obj1 to Obj4 in the image IMG1 (processing PB).
- specifically, the image generation unit 14b uses the detection result to determine whether each of the objects Obj1 to Obj4 is included entirely in the attention area TA or entirely in a non-attention area. If the image generation unit 14b determines that the entire object is included in the attention area TA, it performs the enhancement process on the object. If it determines that the entire object is included only in a non-attention area, it performs the blurring process on the object.
- when the image generation unit 14b determines that an object is not included entirely in the attention area TA (that is, the object exists across the attention area TA and a non-attention area), it specifies the position coordinates of the lowermost ends Obj1b to Obj4b of the objects Obj1 to Obj4 in the image IMG1 using the coordinate data. Then, the image generation unit 14b determines whether or not those position coordinates are included in the divided area located below the attention area TA.
- when the image generation unit 14b determines that the position coordinates are included in the divided area located below the attention area TA, it determines that the enhancement process is to be performed on the object. When it determines that the position coordinates are not included in the divided area located below the attention area TA (that is, the lowermost end of the object exists in the attention area TA), it determines that the blurring process is to be performed on the object.
- in the example of FIG. 12, since the object Obj4 is included only in the non-attention area, the image generation unit 14b determines to perform the blurring process on the object Obj4.
- for the objects Obj1 to Obj3, the image generation unit 14b determines whether or not each of the lowermost ends Obj1b to Obj3b is included in the divided area A3, which is located below the attention area TA.
- the bottom end Obj1b of the object Obj1 and the bottom end Obj2b of the object Obj2 are each included in the divided area A3. For this reason, the image generation unit 14b determines that the enhancement processing is performed on the objects Obj1 and Obj2. On the other hand, the lowermost end Obj3b of the object Obj3 is not included in the divided area A3 (included in the attention area TA). For this reason, the image generation unit 14b determines to perform the blurring process on the object Obj3.
- in other words, the image generation unit 14b determines that the viewer 90 is paying attention mainly to the objects Obj1 and Obj2, which exist in the vicinity of the gazing point F, performs the enhancement process on the objects Obj1 and Obj2, and performs the blurring process on the other objects Obj3 and Obj4. Therefore, even if the viewer 90 looks at the object Obj3 (around the mountainside) within the attention area TA, as long as the attention area TA is the divided area A2, it is determined that the object Obj3 is not being watched, and the blurring process is performed on the object Obj3.
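- the decision rule of the processing PB can be condensed into the following sketch; bands are indexed top-to-bottom, and the final comparison assumes that "below the attention area" means a larger band index:

```python
def process_pb(spanned_areas: set, lowest_end_area: int, attention_idx: int) -> str:
    """Processing PB: decide, per object, whether to enhance or blur,
    using the band that contains the object's lowermost end."""
    if attention_idx not in spanned_areas:
        return "blur"                    # entirely in non-attention areas
    if spanned_areas == {attention_idx}:
        return "enhance"                 # entirely inside the attention area
    # straddles both: enhance only if the lowermost end lies below TA
    return "enhance" if lowest_end_area > attention_idx else "blur"

# Examples matching FIG. 12 (attention area TA = A2, index 1):
assert process_pb({0, 1, 2}, 2, 1) == "enhance"   # Obj1: lowermost end in A3
assert process_pb({1, 2}, 2, 1) == "enhance"      # Obj2: lowermost end in A3
assert process_pb({0, 1}, 1, 1) == "blur"         # Obj3: lowermost end in TA
assert process_pb({2}, 2, 1) == "blur"            # Obj4: only in A3
```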
- the image generation unit 14b generates the processed image IMG2 shown in FIG. 12 by combining the images IA and IB.
- in the post-processing image IMG2 shown in FIG. 12, all edges of the objects Obj1 and Obj2 are emphasized in the process P2 (PB); however, because the result of the process P1 is reflected, only the portion of the edges of the object Obj1 included in the non-processing area is emphasized.
- in other words, the image generation unit 14b determines, based on the positions of the lowermost ends Obj1b to Obj4b, whether or not to perform the enhancement process on the objects Obj1 to Obj4 having at least a part in the non-processing area. Then, based on the determination result, the image generation unit 14b performs the enhancement process only on the portions of the objects included in the non-processing area (in the example of FIG. 12, a part of the object Obj1 and the entire object Obj2).
- when the processing PA is performed, the post-processing image IMG2 is subjected to the enhancement or blurring process as shown in FIG.
- when the processing PB is performed, the post-processing image IMG2 is subjected to the enhancement or blurring process as shown in FIG.
- FIG. 15 is a flowchart illustrating an example of processing in the display device 300.
- the sky region blur determination process (SA) in FIG. 15 corresponds to S1 to S7 and S9 in FIG. 5, and as a result, the image IA is generated.
- the image IA may be generated by the processing of S11 to S14, S6, S7, and S9 shown in FIG.
- the attention area specifying unit 31 divides the image IMG1 into divided areas A1 to A3. Then, the attention area specifying unit 31 specifies the attention area TA from the divided areas A1 to A3 based on the gazing point data indicating the gazing point F detected by the gazing point detection sensor 60 (S21).
- the object detection unit 32 reads the image data from the storage unit 70 (S22) and detects edges in the image IMG1 indicated by the image data (S23). Then, the object detection unit 32 detects the objects Obj1 to Obj4 included in the image IMG1 using the detected edges (S24), and detects (determines) whether each of the objects Obj1 to Obj4 exists in the attention area TA or in a non-attention area (S25).
- the image generation unit 14b generates the image IB based on the detection result of the object detection unit 32. Specifically, the image generation unit 14b performs the enhancement process on the image IMG1 in at least a part of the attention area TA (S26) and the blurring process in at least a part of the non-attention areas (S27). The processing PA or PB is performed in S26 and S27, and as a result, the image IB is generated.
- the image generation unit 14b generates the processed image IMG2 by combining the image IA generated by the sky region blur determination process (SA) and the image IB generated by the processes of S21 to S27 (S28; Image generation step). Then, the image generation unit 14b displays the generated processed image IMG2 on the display unit 50 (S8).
- the processing of S21 and the processing of S22 to S24 may be performed in parallel, or the processing of S21 may be performed after the processing of S22 to S24.
- the processes of S26 and S27 may be performed in parallel, or the process of S26 may be performed after the process of S27.
- the processing of S21 to S27 may be performed after the processing of SA, or the processing of SA may be performed after the processing of S21 to S27.
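- as an end-to-end illustration of the flow of FIG. 15, the NumPy sketch below builds IA, IB, and IMG2 from precomputed sky and attention masks; the box blur and the unsharp-mask enhancement are assumed stand-ins for the embodiment's blurring and enhancement filters:

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 9) -> np.ndarray:
    """k x k box low-pass filter; borders are handled by edge padding."""
    pad = k // 2
    padded = np.pad(img.astype(np.float32), ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(img.dtype)

def generate_img2(img1: np.ndarray, sky_mask: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """IA: IMG1 with only the sky region blurred (process P1 / SA).
    IB: IMG1 enhanced in the attention area, blurred elsewhere (S26-S27).
    IMG2: IA takes priority wherever the sky mask is set (S28)."""
    blurred = box_blur(img1)
    img_ia = np.where(sky_mask[..., None], blurred, img1)
    sharp = np.clip(2.0 * img1 - blurred, 0, 255).astype(img1.dtype)  # unsharp-mask style
    img_ib = np.where(attention_mask[..., None], sharp, blurred)
    return np.where(sky_mask[..., None], img_ia, img_ib)
```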
- as described above, the display device 300 determines whether or not to perform the blurring process on the sky region Bs, and performs, in addition to the process P1 (blurring the sky region Bs according to the determination result), the process P2 (enhancement or blurring based on the gazing point F). Therefore, the display device 300 can bring the sky region Bs closer to its actual appearance in the natural world, and can give the objects Obj1 to Obj4 a three-dimensional appearance close to their actual appearance in the natural world without detecting the gazing point F with high accuracy.
- in the process P2, the image generation unit 14b has been described as performing the enhancement process on at least a part of the attention area TA and the blurring process on at least a part of the non-attention areas of the image IMG1, but the process P2 is not limited to this.
- the image generation unit 14b does not necessarily need to perform both the enhancement process and the blurring process in the process P2, and may be configured to perform only the enhancement process on at least a part of the attention area TA (the non-processing area). That is, the image generation unit 14b does not always need to perform the blurring process on the non-attention areas.
- conversely, the image generation unit 14b may perform only the blurring process on at least a part of the non-attention areas in the process P2. In other words, the image generation unit 14b need not perform the enhancement process on the attention area TA.
- the display device 300 has been described as including the control unit 10b, the display unit 50, and the gazing point detection sensor 60. However, the display device 300 is not limited to this, and the control unit 10b, the display unit 50, and the gazing point detection sensor 60 may be configured as separate devices.
- in that case, the control unit 10b may be provided in an external device (image processing device).
- the gazing point detection sensor 60 only needs to be communicably connected to the display device 300 including the control unit 10b or the external device.
- Embodiment 4 of the present disclosure will be described below with reference to FIGS. 16 to 19.
- members having the same functions as those described in the above embodiment are denoted by the same reference numerals and description thereof is omitted.
- FIG. 16 is a diagram illustrating an example of the configuration of the display device 400.
- FIG. 17 is a diagram illustrating an example of the image IMG1 to be displayed on the display unit 50 and a plurality of divided regions A1 to A3 formed in the image IMG1.
- FIG. 18 is a diagram illustrating an example of the processed image IMG2 displayed on the display unit 50.
- the display device 400 displays an image, and includes a control unit 10c (image processing device), a display unit 50, a gazing point detection sensor 60, and a storage unit 70.
- the control unit 10c controls the display device 400 in an integrated manner.
- the control unit 10c has an image processing function for performing a predetermined process on the image IMG1 illustrated in FIG. 17, and includes an image generation unit 14c, the attention area specifying unit 31, and an edge detection unit 42. That is, the control unit 10c is different from the control unit 10b in that it includes the image generation unit 14c and the edge detection unit 42.
- the edge detection unit 42 reads the image data stored in the storage unit 70, detects edges in the image IMG1 indicated by the image data by the same method as the object detection unit 32 of the third embodiment, and transmits the detection result to the image generation unit 14c. Unlike the object detection unit 32, the edge detection unit 42 performs only edge detection on the image IMG1 and does not detect the objects included in the image IMG1.
- the image generation unit 14c generates an image (hereinafter also referred to as image IC) by performing the enhancement process on the image IMG1 over the entire attention area TA specified by the attention area specifying unit 31, and the blurring process over the entire non-attention areas (process P3).
- the image generation unit 14c generates the post-processing image IMG2 as shown in FIG. 18 by combining the image IA generated in the process P1 and the image IC generated in the process P3.
- specifically, the image generation unit 14c determines which of the divided areas A1 to A3 is the attention area TA, and performs the enhancement process or the blurring process for each of the divided areas A1 to A3.
- for example, in FIG. 17, the image generation unit 14c acquires the area specifying data and specifies that the divided area A2 is the attention area TA and that the divided areas A1 and A3 are the non-attention areas. Based on the detection result of the edge detection unit 42, the image generation unit 14c then generates the image IC by performing the enhancement process on the edges included in the divided area A2 (the attention area TA) and the blurring process on the edges included in the divided areas A1 and A3 (the non-attention areas).
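- a minimal sketch of the process P3, in which whole horizontal bands are processed without any object detection; `enhance` and `blur` are assumed callables (e.g. an edge-enhancement filter and a low-pass filter):

```python
import numpy as np

def process_p3(img1, attention_idx, n_areas, enhance, blur):
    """Apply the enhancement filter to the attention band and the blurring
    filter to every other band; each callable maps an image slice to a
    processed slice of the same shape."""
    h = img1.shape[0]
    bounds = [round(i * h / n_areas) for i in range(n_areas + 1)]
    out = np.empty_like(img1)
    for i in range(n_areas):
        band = slice(bounds[i], bounds[i + 1])
        out[band] = enhance(img1[band]) if i == attention_idx else blur(img1[band])
    return out
```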
- the image generation unit 14c generates the post-processing image IMG2 illustrated in FIG. 18 by combining the image IC with the image IA obtained by performing the blurring process on the sky region Bs. In the post-processing image IMG2, only the edges included in the non-processing area of the attention area TA (edges other than the edges (i) and (ii) above) are emphasized, and the other edges are blurred.
- FIG. 19 is a flowchart illustrating an example of processing in the display device 400.
- after the processing of S23, the image generation unit 14c generates the image IC based on the detection result of the edge detection unit 42. Specifically, the image generation unit 14c performs the enhancement process on the image IMG1 over the entire attention area TA (S34) and the blurring process over the entire non-attention areas (S35). The image generation unit 14c then generates the post-processing image IMG2 by combining the image IA generated by the process SA and the image IC generated by the processes of S21 to S23, S34, and S35 (S28).
- the processing of S21 and the processing of S22 and S23 may be performed in parallel, or the processing of S21 may be performed after the processing of S22 and S23.
- the processes of S34 and S35 may be performed in parallel, or the process of S34 may be performed after the process of S35.
- the processing of S21 to S23, S34, and S35 may be performed after the processing of SA, or the processing of SA may be performed after the processing of S21 to S23, S34, and S35.
- the display device 400 performs the enhancement process on the attention area TA and performs the blurring process on the non-attention area without specifying the objects Obj1 to Obj4.
- the image generation unit 14c performs enhancement processing on the entire non-processing area for the attention area TA. Therefore, the display device 400 can generate the image IC by a simpler process than the process P2 of the third embodiment. That is, the display device 400 can generate the processed image IMG2 by a simpler process than the display device 300.
- the control unit 10c may determine whether or not there is a sky region Bs subject to the blurring process only for the specified attention area TA, and may perform the blurring process on the sky region Bs according to the determination result.
- in this case, the relationship specifying unit 11, the sky region specifying unit 12, and the execution determination unit 13 set only the attention area TA specified by the attention area specifying unit 31 as the processing target.
- the image generation unit 14c performs the blurring process on the non-attention areas, and, for the attention area TA, performs the blurring process on the sky area Bs when it is determined that the sky area Bs is a target of the blurring process, and generates the post-processing image IMG2 by performing the enhancement process on the non-processing area.
- in the process P3, the blurring process is performed on the non-attention areas. For this reason, even if the result of the process P1 is preferentially applied, the non-attention areas are blurred regardless of the presence or absence of a sky area Bs. That is, between the processes P1 and P3, the blurring process may overlap when a sky area Bs exists in a non-attention area. As described above, when the determination of whether or not to perform the blurring process on the sky region Bs is made only for the attention area TA and not for the non-attention areas, such overlapping blurring in the non-attention areas can be avoided. That is, the region to be subjected to the blurring process in the image IMG1 can be determined (specified) efficiently, and the processing can be simplified.
- in addition, the area in which a backlight (not shown) is controlled may be made to coincide with the above-described area where the blurring process is performed. In this way, the areas can be set with a degree of freedom in accordance with the image effect of the image IMG1 and the system or mechanism requirements of the display device 400.
- FIG. 20 is a diagram illustrating an example of the image IMG1 to be displayed on the display unit 50 and a plurality of divided regions C1 to C9 formed in the image IMG1.
- the nine divided regions C1 to C9 are set so as to divide the height H of the image IMG1 into approximately nine equal parts. That is, the widths hc (lengths in the vertical direction) of the divided regions C1 to C9 all satisfy hc ≈ H/9.
- FIG. 21 and FIG. 22 are diagrams for explaining an example of the processing of the display device 500. In FIG. 21, the gazing point F exists in the divided area C4; in FIG. 22, the gazing point F exists in the divided area C8. That is, FIGS. 21 and 22 show cases where the positions of the gazing point F differ from each other.
- the attention area specifying unit 31 specifies the attention area TC based on the gazing point F as in the third and fourth embodiments.
- in the example of FIG. 21, the attention area specifying unit 31 specifies the divided area C4 as the attention area TC; in the example of FIG. 22, it specifies the divided area C8 as the attention area TC.
- the attention area specifying unit 31 further specifies a neighboring area NC that is a divided area located in the vicinity of the attention area TC among the divided areas C1 to C9.
- the attention area specifying unit 31 specifies a divided area adjacent to the attention area TC in the vertical direction as the neighboring area NC.
- the neighboring area NC above the attention area TC is also referred to as the upper neighboring area NU, and the neighboring area NC below the attention area TC is also referred to as the lower neighboring area NL.
- in the example of FIG. 21, the attention area specifying unit 31 specifies the divided area C3 as the upper neighboring area NU and the divided area C5 as the lower neighboring area NL. That is, the attention area specifying unit 31 specifies the divided areas C3 and C5 as the neighboring areas NC of the attention area TC (divided area C4).
- in the example of FIG. 22, the attention area specifying unit 31 specifies the divided area C7 as the upper neighboring area NU and the divided area C9 as the lower neighboring area NL. That is, the attention area specifying unit 31 specifies the divided areas C7 and C9 as the neighboring areas NC of the attention area TC (divided area C8).
- the attention area specifying unit 31 sets (specifies) the attention area TC and the neighboring area NC that have already been specified as new attention areas.
- the attention area newly set by the attention area specifying unit 31 is referred to as a new attention area TC2.
- in the example of FIG. 21, the attention area specifying unit 31 sets, as the new attention area TC2, the divided area C4 (the attention area TC) together with the divided areas C3 and C5 (the neighboring areas NC). That is, the attention area specifying unit 31 sets the three divided areas C3 to C5 as the new attention area TC2, and sets the six divided areas excluding the new attention area TC2 (the divided areas C1, C2, and C6 to C9) as the non-attention areas.
- in the example of FIG. 22, the attention area specifying unit 31 sets, as the new attention area TC2, the divided area C8 (the attention area TC) together with the divided areas C7 and C9 (the neighboring areas NC). That is, the attention area specifying unit 31 sets the three divided areas C7 to C9 as the new attention area TC2, and sets the six divided areas excluding the new attention area TC2 (the divided areas C1 to C6) as the non-attention areas.
- the processing P1 and the processing P2 or P3 described above are performed as in the third and fourth embodiments, and the processed image IMG2 is generated by combining these processing results.
- the attention area TC is likely to change as the gazing point F moves. For this reason, there is a concern that, in the processed image IMG2, the vicinity of the boundary of the attention area may appear unnatural to the viewer 90. This is particularly noticeable when the number of divided areas is small.
- in view of this, the display device 500 is provided with a larger number of divided regions than the display devices 300 and 400 described above, and sets the new attention area TC2 by adding the neighboring area NC as a margin to the attention area TC.
- the enhancement process can thus be performed on the non-processing area of this wider area. Therefore, even when the gazing point F moves, the possibility that the vicinity of the boundary of the attention area TC looks unnatural to the viewer 90 can be reduced.
- this is because, even if the gazing point F moves outside the attention area TC, the moved gazing point F is expected to remain inside the neighboring area NC.
- that is, in the display device 500, the non-processing area included in an attention area (the new attention area TC2) wider than in the display devices 300 and 400 can be the target of the enhancement process, so the processed image IMG2 can be provided to the viewer 90 without a sense of incongruity.
- the case where the number of divided areas is nine is exemplified, but the number of divided areas is not limited to this, and may be five or more.
- the case where the divided area adjacent to the attention area TC in the vertical direction is specified as the neighboring area NC is exemplified, but the method of specifying the neighboring area NC is not limited to this.
- for example, the attention area specifying unit 31 may specify up to N1 divided areas above the attention area TC as the upper neighboring area NU, and up to N2 divided areas below the attention area TC as the lower neighboring area NL. That is, the attention area specifying unit 31 may specify, as the neighboring area NC, the upper neighboring area NU composed of N1 divided areas and the lower neighboring area NL composed of N2 divided areas.
- the values of N1 and N2 may be set in advance by the designer of the display device 500 according to the number of divided areas. That is, the extent of the “nearby” range from the attention area TC may be determined as appropriate by the designer of the display device 500. Further, the values of N1 and N2 may be set so as to be changeable by the user of the display device 500.
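- the selection of the new attention area TC2 can be sketched as follows, with N1 and N2 as parameters; N1 = N2 = 1 reproduces the adjacent-band examples of FIGS. 21 and 22:

```python
def new_attention_area(tc: int, n_areas: int, n1: int = 1, n2: int = 1) -> set[int]:
    """Return the band indices of the new attention area TC2: the attention
    area TC plus up to N1 bands above and N2 bands below, clipped at the
    image border. Bands are indexed 0 (top) to n_areas - 1 (bottom)."""
    return set(range(max(0, tc - n1), min(n_areas, tc + n2 + 1)))

# Example matching FIG. 21: TC = C4 (index 3) among nine bands gives
# TC2 = {2, 3, 4}, i.e. C3 to C5; the remaining six bands are non-attention.
assert new_attention_area(3, 9) == {2, 3, 4}
```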
- control blocks (particularly the control units 10, 10a, 10b, and 10c) of the display devices 100, 100a, 200, 300, 400, and 500 are realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like. Alternatively, it may be realized by software using a CPU (Central Processing Unit).
- in the latter case, the display devices 100, 100a, 200, 300, 400, and 500 include a CPU that executes instructions of a program, which is software realizing each function, a ROM (Read Only Memory) or storage device (referred to as a "recording medium") in which the program and various data are recorded so as to be readable by a computer (or CPU), and a RAM (Random Access Memory) into which the program is expanded.
- the object of the present disclosure is achieved when a computer (or CPU) reads the program from the recording medium and executes it.
- as the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
- the program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) that can transmit the program.
- one aspect of the present disclosure can also be realized in the form of a data signal embedded in a carrier wave in which the program is embodied by electronic transmission.
- the image processing apparatus according to aspect 1 of the present disclosure includes: a sky area specifying unit (12, 12a) that specifies a sky area (Bs, Bs1, Bs2, Bsn) corresponding to the sky in the display target image (image IMG1); an execution determination unit (13, 13a) that determines, based on the color of the sky area specified by the sky area specifying unit, whether or not to perform the blurring process on the sky area; and an image generation unit (14, 14b, 14c) that generates a post-processing image (IMG2) by performing the blurring process on the sky area when the execution determination unit determines that the blurring process is to be performed.
- according to the above configuration, the blurring process is performed on the sky area based on the determination result. For example, when the sky area has a specific color, the sky area is highly likely to be a surface color. By performing the blurring process on the sky area that is a surface color, the appearance unique to a surface color can be reproduced; that is, the sky area can be brought closer to its psychological appearance.
- therefore, according to the image processing device, it is possible to generate a post-processing image including a sky area that is close to the actual appearance in the natural world.
- in aspect 2 of the present disclosure, in the image processing apparatus according to aspect 1, the sky area specifying unit may specify the distant view area as the sky area when the display target image includes a distant view area.
- according to the above configuration, the distant view area included in the display target image can be specified as the sky area.
- in aspect 3 of the present disclosure, the image processing apparatus (10) according to aspect 2 may further include a relationship specifying unit (11) that specifies, for each of a plurality of divided regions (A1 to A3, B1 to B6, and the like) formed by dividing the display target image in the vertical direction, the relationship between the lightness of each of the plurality of pixels constituting the display target image and the number of pixels having that lightness. The sky region specifying unit may specify, as the distant view area, the region formed by the pixels corresponding to the peaks when there is a set of divided areas in which the number of pixels indicated by the peak of the graph (lightness distribution) expressing the relationship specified by the relationship specifying unit decreases from the divided area located on the upper side in the display target image toward the divided area located on the lower side, or is constant in the display target image.
- when a distant view area is included in the display target image, it is generally included more in the upper side than in the lower side of the display target image. In addition, the width of the graph in the sky region is generally relatively narrow. Considering these points, when the number of pixels indicated by the peak decreases or is constant as described above, there is a high possibility that a distant view area serving as a sky area is included in the display target image.
- in aspect 4 of the present disclosure, in the image processing apparatus according to aspect 3, the execution determination unit (13) may determine that the blurring process is to be performed on the sky area when the color of the pixels constituting the distant view area specified by the sky area specifying unit is a specific color, that is, when the distant view area is a sky area having a specific color.
- according to the above configuration, in the processed image, a sky region that is a surface color (for example, a blue sky) can be brought close to its actual appearance in the natural world.
- in aspect 5 of the present disclosure, the image processing apparatus (10a) according to aspect 2 may further include a pixel counting unit (21) that classifies, for each of the plurality of divided regions formed by dividing the display target image in the vertical direction, the color of each of the plurality of pixels constituting the display target image into one of a plurality of predetermined colors, and counts the number of pixels for each of the predetermined colors. The sky region specifying unit (12a) may specify, as the distant view area, the region formed by the pixels classified into any one of the predetermined colors when the number of those pixels counted by the pixel counting unit decreases from the upper divided area (A1) toward the lower divided area (A3) in the display target image, or is constant.
- as described above, a distant view area is generally included more in the upper side than in the lower side of the display target image. In addition, the color distribution in a sky region is generally relatively narrow. Considering these points, when the number of pixels classified into any one of the predetermined colors decreases or is constant as described above, there is a high possibility that a distant view area serving as a sky area is included in the display target image. Moreover, since the determination is based on counting pixels classified into predetermined colors, it can be performed relatively easily.
- in aspect 6 of the present disclosure, in the image processing apparatus according to aspect 5, the execution determination unit (13a) may determine that the blurring process is to be performed on the sky region when the predetermined color corresponding to the pixels constituting the specified distant view region is a specific color.
- the blurring process can be performed on a distant view area as a sky area having a specific color. For this reason, in the processed image, for example, a sky region (for example, a blue sky) that is a surface color can be brought close to an actual appearance in the natural world.
- in aspect 7 of the present disclosure, the execution determination unit may set only the sky area included in the upper divided area (A1 or A2) among the plurality of divided areas as the target of the blurring process when the contact area between that sky area and the sky area included in the lower divided area (A2 or A3) adjacent to the upper divided area is less than a predetermined amount.
- a sky area is not necessarily an area corresponding to the actual sky; it may represent an object that merely has a specific color and is different from the sky.
- according to the above configuration, when the contact area is less than the predetermined amount, it is determined that such an object exists in the lower divided area, and the blurring process can be withheld for that object. For this reason, it is possible to prevent problems such as unintentionally performing the blurring process on an object whose substance can be recognized.
- in aspect 8 of the present disclosure, among the plurality of divided areas, the divided regions (B1 to B4) positioned above the divided areas (B5, B6) located on the lower side may be shorter in the vertical direction (width ha < width hb).
- a sky area is often included in the upper part of the display target image. For this reason, as in the configuration described above, the sky region can be specified with high accuracy by making the vertical length of the upper divided regions shorter than that of the lower divided regions.
- in aspect 9 of the present disclosure, the image processing apparatus may be communicably connectable to a display device (300, 400, 500) having a display surface (display unit 50) for displaying the display target image and to a sensor (gazing point detection sensor 60) that detects the gazing point (F) of the viewer (90) on the display surface, and may include an attention area specifying unit (31) that specifies, based on the gazing point detected by the sensor, the attention area (TA, TC, new attention area TC2), which is the divided area that the viewer is paying attention to. The image generation unit (14b, 14c) may perform the enhancement process on at least a part of the non-processing area other than the sky area to be subjected to the blurring process in the attention area.
- the image generation unit can perform the enhancement processing on the non-processing region based on the result of the attention region specifying unit specifying the attention region based on the gazing point.
- therefore, the image processing apparatus only needs to identify which divided area of the display surface the viewer is watching, and can generate a post-processing image that produces a relative difference in appearance between the portion of the non-processing area where the enhancement process is performed and the other portions where it is not performed.
- the image processing apparatus can generate a post-processed image that provides a stereoscopic effect close to the actual appearance in the natural world without performing high-precision detection of the point of sight. Therefore, it is possible to generate a post-processing image that provides the above three-dimensional effect with a simple process.
- a display device (100, 100a, 200, 300, 400, 500) according to aspect 10 of the present disclosure includes the image processing device according to any one of aspects 1 to 9.
- the control method of the image processing device according to aspect 11 of the present disclosure includes: a sky region specifying step of specifying a sky region corresponding to the sky in the display target image; an execution feasibility determination step of determining, based on the color of the sky region specified in the sky region specifying step, whether or not to perform the blurring process on the sky region; and an image generation step of generating a post-processing image by performing the blurring process on the sky region when it is determined in the execution feasibility determination step that the blurring process is to be performed.
- the image processing apparatus according to each aspect of the present disclosure may be realized by a computer. In this case, an image processing program that realizes the image processing apparatus by the computer by causing the computer to operate as each unit (software element) included in the image processing apparatus, and a computer-readable recording medium on which the program is recorded, also fall within the scope of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The purpose of the present invention is to generate an image which approximates how an object would actually appear in nature. Provided is a display device (100), comprising: a sky region identification unit (12) which identifies a sky region (Bs) in an image (IMG1); an execution assessment unit (13) which, on the basis of the color of the sky region, assesses whether to carry out a blur process upon the sky region; and an image generating unit (14) which, if it has been assessed that the blur process is to be carried out, generates a processed image (IMG2) by carrying out the blur process upon the sky region.
Description
The following disclosure relates to an image processing apparatus that processes an image.
Conventionally, various technologies have been proposed for display devices to display images that appear more natural to the viewer. As an example, Patent Document 1 discloses a display device (image display device) intended to display an object on a screen so as to obtain a stereoscopic effect close to the actual appearance in the natural world.
For example, when a human sees a certain scene in the natural world, an object existing in the vicinity of the human's gazing point is clearly visible because the eyes are in focus. On the other hand, an object existing at a position away from the gazing point appears blurred because the eyes are out of focus. Through this way of seeing objects, a human obtains a sense of three-dimensionality.
The display device of Patent Document 1 is configured for the purpose of reproducing such an appearance of objects. More specifically, the display device of Patent Document 1 includes image processing means for performing diffusion processing on image information representing an object according to the distance from the gazing point to the position of the object, and image reproducing means for generating, based on the image information subjected to the diffusion processing, pixel information representing each pixel to be displayed on the display screen.
Patent Document 2 discloses a technique for changing the display of the display based on the user's line of sight (how the user looks at the computer display).
However, in the inventions of Patent Documents 1 and 2, although diffusion processing is executed or the display is changed based on the user's line of sight, they do not go as far as generating an image close to the actual appearance in the natural world.
An object of one aspect of the present disclosure is to realize an image processing apparatus that generates an image close to an actual appearance in the natural world.
In order to solve the above problem, an image processing apparatus according to an aspect of the present disclosure includes: a sky area specifying unit that specifies a sky area corresponding to the sky in a display target image; an execution feasibility determination unit that determines, based on the color of the sky area specified by the sky area specifying unit, whether or not to perform blurring processing on the sky area; and an image generation unit that generates a post-processing image by performing the blurring processing on the sky area when the execution feasibility determination unit determines that the blurring processing is to be performed.
In order to solve the above problem, a control method for an image processing device according to an aspect of the present disclosure includes: a sky region specifying step of specifying a sky region corresponding to the sky in a display target image; an execution feasibility determination step of determining, based on the color of the sky region specified in the sky region specifying step, whether or not to perform blurring processing on the sky region; and an image generation step of generating a post-processing image by performing the blurring processing on the sky region when it is determined in the execution feasibility determination step that the blurring processing is to be performed.
According to the image processing apparatus and the control method thereof according to one aspect of the present disclosure, there is an effect that it is possible to generate an image that is close to the actual appearance in the natural world.
Embodiment 1
Hereinafter, Embodiment 1 of the present disclosure will be described in detail based on FIGS. 1 to 5.
<About the outline and surface color of an embodiment of the present disclosure>
In general, a physical color can be defined by tristimulus values (X, Y, Z). The tristimulus values are obtained by integrating values calculated from (luminance of the light source) × (reflectance of the object) × (human visual sensitivity) at each wavelength from about 400 nm to 800 nm. Since the color represented by these tristimulus values is an absolute color, colors having the same stimulus values should look the same. However, depending on the situation or environment, humans may perceive colors having the same stimulus values as psychologically different colors.
The German psychologist David Katz classified colors psychologically based on the finding that the appearance of a color depends not only on its tristimulus values but also on the localization and texture of the object. In this classification there is a color category called "surface color". A surface color is a color that appears in such a way that the localization and the texture of an object surface cannot be clearly perceived, and from which no attribute other than color can be sensed. One example is the color of the blue sky (sometimes referred to as "sky blue").
When actually looking at a distant blue sky (background), humans may perceive something like edges or texture in the blue sky due to the reflection and scattering of light, the state of the sun, and so on. However, such edges or textures have no fixed localization. That is, every time one looks again at the distant blue sky, their positions change or they disappear; unlike clouds, birds, airplanes, or a sunset sky, which are clearly distinct from the blue sky, they are not objects that humans consciously recognize.
When an edge or texture that appears only momentarily in this way is displayed on a general display device, the edge or texture may be recognized as if it came from an actually existing object. This is because the distance from the viewer to the display surface of the display device is much shorter than the distance from the viewer to the distant blue sky, so the position of that object is psychologically perceived as being close to the viewer. As a result, in the above case, the distant blue sky may look unnatural.
The image processing apparatus according to an aspect of the present disclosure performs blurring processing on a sky region (for example, a distant background) based on the color of the sky region. Specifically, when the color of the sky region is sky blue, the image processing apparatus actively removes edges and textures in the region that can psychologically be recognized as a surface color by performing the blurring processing on the sky region. As a result, a surface color can be perceived as a surface color, and a natural appearance of the sky region can be provided.
Hereinafter, specific processing of the image processing apparatus according to an aspect of the present disclosure will be described.
<Configuration of Display Device 100>
First, the display device 100 of the present embodiment will be described with reference to FIGS. 1 to 4. FIG. 1 is a diagram illustrating an example of the configuration of the display device 100. FIG. 2 is a diagram illustrating an example of an image IMG1 (display target image) to be displayed on the display unit 50 (display surface) and a plurality of divided regions A1 to A3 formed in the image IMG1. FIGS. 3(a) to 3(d) are diagrams for explaining the concept of a technique for specifying the sky region Bs using graphs expressing the relationship between lightness and the number of pixels in each of the divided regions A1 to A3. FIG. 4 is a diagram illustrating an example of the processed image IMG2 displayed on the display unit 50.
Note that FIG. 2 shows an example in which the image IMG1 (input image) before processing by the control unit 10 is displayed on the display unit 50 for simplicity of explanation; in practice, as described later, an image processed by the control unit 10 (for example, the processed image IMG2 shown in FIG. 4) is displayed on the display unit 50. The same applies to FIGS. 10, 17, and 20 to 22 of the second and subsequent embodiments. The images IMG1 shown in these figures are identical.
As shown in FIG. 1, the display device 100 displays an image, and includes a control unit 10 (image processing device), a display unit 50, and a storage unit 70. The control unit 10 will be described later.
The display unit 50 displays an image under the control of the control unit 10, and is composed of, for example, a liquid crystal panel. In the present embodiment, the display unit 50 displays the processed image IMG2 (see FIG. 4).
The storage unit 70 stores, for example, various control programs executed by the control unit 10, and is configured by a non-volatile storage device such as a hard disk or a flash memory. The storage unit 70 stores, for example, image data indicating the image IMG1.
In the present embodiment, it is assumed that the control unit 10 reads image data representing the image IMG1, a still image stored in the storage unit 70. However, the present disclosure is not limited to this; moving image data representing a moving image may be stored in the storage unit 70 and read by the control unit 10. In this case, the control unit 10 may perform the image processing described below on each frame constituting the moving image.
Further, the image data or the moving image data is not necessarily stored in the storage unit 70 in advance, and may be acquired by receiving a broadcast wave, or may be acquired by receiving image data or a moving image connected to the display device 100. You may acquire from the external apparatus (for example, recording device) which stores or produces | generates image data.
Examples of the display device 100 include a PC (Personal Computer), a portable information terminal such as a multi-function mobile phone (smartphone) or a tablet, and a television.
The same applies to the display devices 100a and 200 to 500 described later with respect to the types of image data and the application examples of the display device.
<Specific Configuration of Control Unit 10>
The control unit 10 controls the display device 100 in an integrated manner. In the present embodiment in particular, the control unit 10 has an image processing function for performing predetermined processing on the image IMG1 shown in FIG. 2, and includes a relationship specifying unit 11, a sky region specifying unit 12, an execution feasibility determination unit 13, and an image generation unit 14.
For each of the plurality of divided regions A1 to A3 formed by dividing the image IMG1 shown in FIG. 2 in its vertical direction, the relationship specifying unit 11 specifies the relationship between the lightness of each of the plurality of pixels constituting the image IMG1 and the number of pixels having that lightness.
Specifically, the relationship specifying unit 11 reads the image data stored in the storage unit 70 and identifies the vertical direction (the Y-axis direction in FIG. 2) of the image IMG1 represented by the image data. Then, as shown in FIG. 2, the image IMG1 is divided into the plurality of divided regions A1 to A3 along that vertical direction. That is, the plurality of divided regions A1 to A3 are formed (set) for the image IMG1. In the present embodiment, the width h (length in the vertical direction) of each of the divided regions A1 to A3 is set so as to divide the width (length) H of the image IMG1 in the Y-axis direction into approximately three equal parts, that is, h ≈ H/3. Of course, this width can be adjusted as appropriate so that the actual image processing is carried out effectively.
In general, in an image displayed on the display unit 50, the lower part (the -Y-axis direction) often corresponds to the foreground (the near side of the scene) and the upper part (the +Y-axis direction) to the distant view (the far side of the scene). Therefore, by dividing the image IMG1 along its vertical direction to form the plurality of divided regions A1 to A3, the image generation unit 14 can generate a processed image IMG2 that, taking the perspective of the image (the three-dimensional depth of the scene) into account, comes close to how the scene actually appears in the natural world.
Note that the width and the number of the divided regions can be set arbitrarily. The width of at least some of the divided regions may differ from that of the other divided regions, and the number of divided regions may be two, or four or more.
The relationship specifying unit 11 calculates the lightness of each pixel based on the gradation values (for example, the three gradation values of red, green, and blue) of each of the plurality of pixels constituting the image IMG1, and counts the number of pixels having each lightness. The relationship specifying unit 11 then specifies, for each of the divided regions A1 to A3, the relationship between the calculated lightness and the counted number of pixels. The relationship specifying unit 11 may express this relationship as a graph (lightness distribution) with, for example, lightness on the horizontal axis and appearance frequency (number of pixels) on the vertical axis; such graphs are shown, for example, in FIGS. 3(b) to 3(d). The relationship specifying unit 11 associates the specified relationship with each of the divided regions A1 to A3 and transmits it to the sky region specifying unit 12 as specified-relationship data.
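As an illustration only (this sketch is not part of the patent disclosure), the per-region lightness histogram described above could be computed along the following lines in Python, assuming the image is available as an RGB NumPy array and approximating lightness by the mean of the three gradation values:

```python
import numpy as np

def band_histograms(img_rgb, n_bands=3, n_bins=64):
    """Lightness histogram (pixel count per lightness bin) for each band."""
    height = img_rgb.shape[0]
    # Approximate lightness from the RGB gradation values, normalized to 0..1
    # (the patent does not fix a formula; the channel mean is an assumption).
    lightness = img_rgb.astype(np.float32).mean(axis=2) / 255.0
    # Split the image top-to-bottom into bands of width ~ H / n_bands.
    bounds = np.linspace(0, height, n_bands + 1).astype(int)
    return [np.histogram(lightness[t:b], bins=n_bins, range=(0.0, 1.0))[0]
            for t, b in zip(bounds[:-1], bounds[1:])]  # index 0 = A1 (top)
```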
Some recent display devices perform histogram-based image processing for backlight control, active gamma correction, gradation-crush correction, and the like. If the display device 100 has such an image processing function, the relationship specifying unit 11 can also operate as part of that function, which simplifies the processing.
The sky region specifying unit 12 specifies the sky region Bs, the region corresponding to the sky in the image IMG1. In the present embodiment, when the number of pixels indicated by the peak of the graph expressing the above relationship for each of the divided regions A1 to A3 specified by the relationship specifying unit 11 decreases from the divided region A1 (the upper divided region) toward the divided region A3 (the lower divided region), the sky region specifying unit 12 specifies the region formed by the pixels corresponding to the peak as the distant view region. Having specified a distant view region in the image IMG1, the sky region specifying unit 12 determines that the image IMG1 contains a distant view region and specifies that distant view region as the sky region Bs.
Here, the concept of the process of specifying the distant view region in the image IMG1 (the process of determining whether the image IMG1 contains a distant view region) will be described with reference to FIG. 3. The graphs in FIGS. 3(a) to 3(d) show the above relationship for an arbitrary display target image, with lightness on the horizontal axis and its appearance frequency on the vertical axis. The values on the vertical axis are given for convenience and have no substantial meaning; the values on the horizontal axis are normalized lightness. It is assumed that the display target image contains a region corresponding to the sky (the sky region Bs in FIG. 2) and a group of objects other than that region. For ease of explanation, the region corresponding to the sky is referred to as the sky region Bs in the description of FIG. 3.
When the sky region Bs is a blue sky or the like, the lightness of the pixels constituting it is relatively constant, that is, confined to values within a predetermined range, so the graph of the above relationship for the sky region Bs tends to be narrow. In the example of FIG. 3(a), as the "sky" curve shows, the graph for the sky region Bs is a narrow one with a peak at a lightness value of around 0.8.
On the other hand, the group of objects other than the sky region Bs is generally a combination of various colors, so the graph of the above relationship for the object group tends to be broad. In the example of FIG. 3(a), as the "other than sky" curve shows, the graph for the object group is a broad one with a peak at a lightness value of around 0.5. Depending on the combination of colors, the graph for the object group may have more than one peak.
Synthesizing and normalizing these two graphs yields graphs such as those shown in FIGS. 3(b) to 3(d) (histograms of the display target image, i.e., its lightness distribution). The graphs in FIGS. 3(b) to 3(d) were obtained by preparing three display target images with mutually different proportions of the sky region Bs, generating the two graphs of FIG. 3(a) for each of the three, and combining the two graphs. The proportion of the sky region Bs in the display target image decreases in the order of FIGS. 3(b) to 3(d). As FIGS. 3(b) to 3(d) show, the number of pixels at the peak of the graph, and of pixels with lightness near the peak, becomes smaller as the proportion of the sky region Bs decreases.
As described above, when a display target image contains a distant view region, the proportion the distant view region occupies is larger in the upper part of the image than in the lower part. Therefore, by following how the number of pixels at the peak shown in FIGS. 3(b) to 3(d) (the number of pixels with lightness near the peak may be included) changes from the top of the display target image toward the bottom, the distant view region in the display target image can be specified; that is, it can be determined whether the image IMG1 contains a distant view region serving as the sky region Bs.
The sky region specifying unit 12 determines the number of pixels indicated by the peak (which may include the number of pixels with lightness near the peak) in the graph expressing the above relationship for each of the divided regions A1 to A3 specified by the relationship specifying unit 11. When that number of pixels decreases in the order of the divided regions A1 to A3, the sky region specifying unit 12 specifies the region formed by the pixels having the peak lightness and lightness near the peak as the distant view region, and determines that the image IMG1 contains a distant view region serving as the sky region Bs. That is, the sky region specifying unit 12 specifies the pixels with the peak lightness and lightness near the peak as the pixels constituting the distant view region, for example the pixels whose lightness lies within a predetermined range of the lightness corresponding to the peak (the maximum pixel count), such as ±0.1 in normalized lightness.
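For illustration, a minimal sketch of the decreasing-peak check (under the same assumptions as the histogram sketch above; the strict-decrease test and the ±0.1 window follow the text, everything else is an assumption):

```python
import numpy as np

def sky_mask_from_peaks(lightness, hists, n_bins=64, window=0.1):
    """lightness: HxW array in 0..1; hists: per-band histograms, top first."""
    peak_bins = [int(np.argmax(h)) for h in hists]
    peak_counts = [h[p] for h, p in zip(hists, peak_bins)]
    # Require the peak pixel count to decrease from the top band downward.
    if not all(a > b for a, b in zip(peak_counts, peak_counts[1:])):
        return None  # no distant view (sky) region identified
    # Pixels within +/-0.1 of the top band's peak lightness form the region.
    peak_value = (peak_bins[0] + 0.5) / n_bins
    return np.abs(lightness - peak_value) <= window  # boolean sky mask
```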
When the sky region specifying unit 12 finds multiple peaks in the graph of each of the divided regions A1 to A3, it determines that the image IMG1 contains the sky region Bs if at least one of the peaks decreases as described above.
When the sky region specifying unit 12 determines that the image IMG1 contains the sky region Bs, it transmits pixel data indicating the pixels constituting the sky region Bs to the execution feasibility determination unit 13.
In a general display device, it is difficult to determine perspective in the image IMG1, because the image IMG1 itself carries no information about what the objects contained in it are, how large they are, and so on. A general display device therefore obtains such information through a database search or by analyzing metadata accompanying the image IMG1.
The display device 100, by contrast, only needs to specify the sky region Bs as the target of the blurring process. That is, the display device 100 can specify the distant view region on the premise that the distant view region is the sky region Bs. Consequently, in the display device 100 the sky region specifying unit 12 can specify the distant view region by the simple technique of merely using the relationship specified by the relationship specifying unit 11 (the graph expressing that relationship); in other words, the perspective determination for the image IMG1 can be performed by a simple technique. The same holds for the processing of Modification 1 of the present embodiment described later.
The execution feasibility determination unit 13 determines, based on the color of the sky region Bs specified by the sky region specifying unit 12, whether to perform the blurring process on the sky region Bs. In the present embodiment, the execution feasibility determination unit 13 determines to perform the blurring process on the sky region Bs when the colors of the pixels specified by the sky region specifying unit 12 include a specific color and the number of pixels having the specific color in the distant view region is a predetermined number or more.
The specific color is a color that allows the color of the sky region Bs to be judged a surface color, such as the color of a blue sky (referred to as sky blue in the present embodiment). The predetermined number is a number of pixels sufficient to conclude that the sky region Bs has the expanse of a sky. That is, when the pixels of a sky region Bs wide enough to qualify as sky show sky blue, the execution feasibility determination unit 13 judges the sky region Bs to be a surface color and determines that the blurring process is to be performed on it.
The execution feasibility determination unit 13 determines whether a color is sky blue using, for example, the 24-hue circle of the PCCS (Practical Color Co-ordinate System). When the color of a pixel constituting the sky region Bs is, for example, 15:BG, 16:gB, or 17:B in the 24-hue circle, the unit judges the color of that pixel to be sky blue. In this case, the specific color is 15:BG, 16:gB, or 17:B.
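A rough stand-in for this test (illustrative only: the patent specifies PCCS hues 15:BG/16:gB/17:B, while the HSV hue window and the saturation/value floors below are assumptions):

```python
import colorsys

def is_sky_blue(r, g, b, hue_lo=180/360, hue_hi=250/360,
                min_sat=0.15, min_val=0.3):
    """Approximate the PCCS 15:BG / 16:gB / 17:B test with an HSV hue window
    from blue-green to blue; all thresholds here are placeholder values."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return hue_lo <= h <= hue_hi and s >= min_sat and v >= min_val

def should_blur(sky_pixels_rgb, min_count=10_000):
    """min_count stands in for the 'predetermined number' in the text."""
    return sum(is_sky_blue(r, g, b) for r, g, b in sky_pixels_rgb) >= min_count
```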
There are also methods that use a database, deep learning, or the like to determine whether the image IMG1 contains a blue sky, but they are not easy to implement. In the present embodiment, by contrast, whether the image IMG1 contains a blue sky can be determined simply by testing whether the color is sky blue as described above; that is, the determination can be made easily.
The specific color is not limited to the above; any color that allows the color of the sky region Bs to be judged a surface color may be selected as the specific color. Nor does the specific color have to be set using the 24-hue circle of the PCCS; for example, it may be selected from a hue group divided into 6, 12, or 20 classes.
Furthermore, when the contact region between the sky region Bs1 contained in the divided region A1 (the upper divided region) and the sky region Bs2 contained in the adjacent divided region A2 (the lower divided region) is less than a predetermined amount, the execution feasibility determination unit 13 may make only the sky region Bs1 contained in the divided region A1 the target of the blurring process. That is, in this case, even if the unit has once determined that the blurring process should be performed on the sky region Bs2, it determines that the blurring process is not to be performed on the sky region Bs2. In FIG. 2, the sky region Bs of the image IMG1 consists of the sky regions Bs1 and Bs2.
Specifically, the execution feasibility determination unit 13 determines whether the sky region Bs is continuous across the boundary between the divided regions A1 and A2 in the image IMG1. The unit identifies the pixels near that boundary in the sky region Bs1 contained in the divided region A1 and the pixels near that boundary in the sky region Bs2 contained in the divided region A2. It identifies the region where these pixels adjoin each other across the boundary as the contact region, and determines whether the number of adjoining pixels in the contact region is a predetermined number or more, or a predetermined proportion or more of all the pixels near the boundary. If the unit determines that the number of such pixels is at or above the predetermined number or proportion, it determines that the sky region Bs2 is continuous with the sky region Bs1 and makes the sky region Bs2 a target of the blurring process as well. If it determines that the number is below the predetermined number or proportion, it determines that the sky region Bs2 is not continuous with the sky region Bs1, does not make the sky region Bs2 a target of the blurring process (or excludes it from the targets), and makes only the sky region Bs1 the target. In the example of FIG. 2, the part enclosed by the dotted frame is the region identified as the contact region.
Even when a region has been determined to be the sky region Bs and its constituent pixels determined to be sky blue, that region may actually be an object different from the sky. In that case, the actual state of the object can be clearly seen, so it is not appropriate to apply the blurring process to it.
As described above, the upper part of the image IMG1 is more often a distant view region. Therefore, by determining whether the sky region Bs is continuous from the upper divided region toward the lower divided region, it can be determined whether the sky region Bs2 contained in the lower divided region also represents the sky, like the sky region Bs1. In other words, when the regions are determined not to be continuous, the sky region Bs2 can be judged to actually represent an object other than the sky, and the blurring process can be withheld from that object. This prevents the inappropriate blurring described above.
The predetermined amount (the predetermined number or proportion) need only be set to a value from which the continuity can be concluded. For example, when the predetermined amount is a proportion, it can be set to, say, 20%. As a concrete example, when the number of pixel pairs straddling the boundary whose upper and lower pixels are both sky blue exceeds 20% of the number of horizontal display pixels (the total number of pixels in one row along the horizontal direction (X-axis direction) of the image IMG1), the sky regions Bs1 and Bs2 can be determined to be continuous.
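The 20% example above could be checked as follows (a sketch under the same NumPy assumptions; only the pair count and the 20% threshold come from the text):

```python
import numpy as np

def regions_continuous(sky_blue_mask, boundary_row, ratio=0.20):
    """sky_blue_mask: HxW boolean array marking sky-blue pixels.
    Counts the pixel pairs straddling the band boundary whose upper and
    lower pixels are both sky blue, and compares against 20% of the width."""
    above = sky_blue_mask[boundary_row - 1]  # last row of the upper band
    below = sky_blue_mask[boundary_row]      # first row of the lower band
    pairs = np.logical_and(above, below).sum()
    return pairs > ratio * sky_blue_mask.shape[1]
```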
In the above, the upper divided region was described as the divided region A1 and the lower divided region as the divided region A2, but the same description applies when the upper divided region is the divided region A2 and the lower divided region is the divided region A3.
The execution feasibility determination unit 13 transmits to the image generation unit 14 the result of the determination as to whether to perform the blurring process on the sky region Bs.
When the execution feasibility determination unit 13 determines that the blurring process is to be performed, the image generation unit 14 generates the processed image IMG2 by performing the blurring process on the sky region Bs; that is, it performs the processing that generates the processed image IMG2 shown in FIG. 4. The image generation unit 14 then displays the processed image IMG2 on the display unit 50 by transmitting image data representing the generated image to the display unit 50.
The blurring process is realized by a known technique using, for example, a filter (e.g., a low-pass filter). One example is applying a low-pass filter consisting of a 3×3 matrix (averaging matrix), for instance with a center value of 0 and a value of 1/8 elsewhere, or a center value of 1/2 and a value of 1/16 elsewhere. The process is not limited to these; various low-pass filters, including a simple averaging matrix, can be applied to the blurring process. It suffices that, when the display device 100 displays the processed image IMG2, the blurring has been applied to the extent that no distinct edges or texture can be seen in the sky region Bs.
The matrix size of the low-pass filter may be chosen appropriately in view of the application of the display device 100, its circuit scale, or the resolution of the display unit 50; for example, a size of about 5×5 to 15×15 may be used. According to the present inventors' studies, applying a low-pass filter of about 5×5 for HD, about 9×9 for 4K, and about 13×13 for 8K to the blurring of the sky region Bs has been confirmed to bring the sky region Bs close to its actual (natural) appearance.
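As an illustration, the kernels named in the text and a masked application of the filter might look as follows (a sketch only; SciPy's convolution is used for convenience, and restricting the result to the sky mask is an implementation assumption):

```python
import numpy as np
from scipy.ndimage import convolve

# The two 3x3 examples from the text:
k_a = np.full((3, 3), 1/8);  k_a[1, 1] = 0.0   # center 0, 1/8 elsewhere
k_b = np.full((3, 3), 1/16); k_b[1, 1] = 0.5   # center 1/2, 1/16 elsewhere

def blur_sky(img_rgb, sky_mask, ksize=9):
    """Apply a ksize x ksize averaging low-pass filter and keep the result
    only inside the sky mask (ksize=9 loosely follows the 4K suggestion)."""
    kernel = np.full((ksize, ksize), 1.0 / ksize**2)
    out = img_rgb.astype(np.float32)
    for c in range(img_rgb.shape[2]):
        blurred = convolve(out[:, :, c], kernel, mode='nearest')
        out[:, :, c] = np.where(sky_mask, blurred, out[:, :, c])
    return out.astype(img_rgb.dtype)
```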
The purpose of applying the low-pass filter in the present embodiment is not noise removal in the sky region Bs but clearly blurring the sky region Bs. Blurring is possible even with a 3×3 low-pass filter, but to achieve the goal of blurring more distinctly it is preferable to apply a low-pass filter larger than 3×3, as described above.
When the sky region Bs contains an object distinguishable from it (e.g., a bird), the image generation unit 14 may blur the edges of that object as well by performing the blurring process on the entire sky region Bs. An object actually observed in the sky can be seen distinctly because it has both edges and texture, but its position (sense of distance) blends into the sky (the background) and loses distinctness. By blurring the object's edges as described above, the processed image IMG2 can be brought closer to how the object actually appears. The present inventors have confirmed that when the edges of such an object were instead enhanced, the object appeared to float off the display surface and the natural appearance of the sky was impaired.
The processed image IMG2 that the image generation unit 14 generates from the image IMG1 of FIG. 2 is as shown in FIG. 4. Here, the color of the sky region Bs in FIG. 2 is sky blue in most of the region, and the contact region of the sky region Bs between the divided regions A1 and A2 is at or above the predetermined amount.
In the example of FIG. 4, the blurring process has been applied to the sky region Bs. Blurring has also been applied to (i) the edge between the sky region Bs and the object Obj1 (a tree) and (ii) the edge between the sky region Bs and the object Obj3 (a mountain). The regions other than (i) and (ii) (including parts of the objects Obj1 and Obj3, the object Obj2 (a house), and the object Obj4 (a flower)) remain as in the image IMG1. That is, relative to the image IMG1, the region judged to be a surface color has been blurred so that edges and texture cannot be seen, while the regions judged not to be a surface color retain their edges and texture. The texture inside the objects Obj1 and Obj3 is also retained.
<Processing in Display Device 100>
Next, an example of the processing in the display device 100 (a control method of the image processing device) will be described with reference to FIG. 5. FIG. 5 is a flowchart illustrating an example of the processing in the display device 100.
First, the relationship specifying unit 11 reads the image data from the storage unit 70 (S1) and divides the image IMG1 into the plurality of divided regions A1 to A3 along the vertical direction of the image IMG1. The relationship specifying unit 11 specifies the above relationship in each of the divided regions A1 to A3 (S2).
When the number of pixels at the peak of the graph expressing the above relationship in each of the divided regions A1 to A3 specified by the relationship specifying unit 11 decreases from the divided region A1 toward the divided region A3, the sky region specifying unit 12 specifies the region formed by the pixels corresponding to the peak as the distant view region. That is, the sky region specifying unit 12 determines that the image IMG1 contains a distant view region and specifies it as the sky region Bs (S3; sky region specifying step). The sky region specifying unit 12 specifies the pixels corresponding to the peak as the pixels constituting the sky region Bs (S4), and it is then determined whether those pixels are sky blue and a predetermined number or more of them exist (S5; execution feasibility determination step).
If YES in S5, the execution feasibility determination unit 13 determines whether the sky regions Bs in the divided regions A1 and A2 are continuous (S6; execution feasibility determination step); specifically, it determines whether the contact region between the divided regions A1 and A2 amounts to the predetermined amount or more. If NO in S5, the image generation unit 14 displays the image IMG1 on the display unit 50 without performing the blurring process on it (S8).
If YES in S6, the execution feasibility determination unit 13 determines that the blurring process is to be performed on the entire sky region Bs. In response to that determination, the image generation unit 14 generates the processed image IMG2 by performing the blurring process on the entire sky region Bs (S7; image generation step), and displays the processed image IMG2 on the display unit 50 (S8).
If NO in S6, the execution feasibility determination unit 13 determines that the blurring process is to be performed only on the sky region Bs1 contained in the divided region A1. In response to that determination, the image generation unit 14 performs the blurring process only on the sky region Bs1 and generates a processed image IMG2 (an image different from the processed image IMG2 shown in FIG. 4) (S9; image generation step), and displays it on the display unit 50 (S8).
In S6, when the sky region Bs exists in the divided regions A2 and A3, it is also determined whether the sky region Bs is continuous between the divided regions A2 and A3; if NO in S6, the blurring process is not performed on the sky region Bs contained in the divided region A3. Further, when the sky region Bs is determined not to be continuous between the divided regions A1 and A2, the blurring process is performed neither on the divided region A2 nor on the divided region A3, even if the sky region Bs is determined to be continuous between the divided regions A2 and A3.
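Purely as a structural sketch of the flow in FIG. 5 (S1 to S9), tying together the steps illustrated above; the callables are hypothetical stand-ins for the earlier sketches, not names from the patent:

```python
def process(img_rgb, find_sky, is_blue_enough, continuous, blur):
    """find_sky returns (sky_mask, upper_band_mask) or (None, None)."""
    sky_mask, upper_mask = find_sky(img_rgb)            # S1-S4
    if sky_mask is None or not is_blue_enough(img_rgb, sky_mask):
        return img_rgb                                  # S5 NO -> S8 as-is
    if continuous(sky_mask):                            # S6
        return blur(img_rgb, sky_mask)                  # S7: blur all of Bs
    return blur(img_rgb, upper_mask)                    # S9: blur Bs1 only
```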
<Modification 1>
(Configuration of display device 100a)
Next, a display device 100a, a modification of the present embodiment, will be described with reference to FIG. 6. FIG. 6 is a diagram illustrating an example of the configuration of the display device 100a.
The display device 100a includes a control unit 10a (image processing device) that controls the display device 100a in an integrated manner. The control unit 10a has an image processing function for performing predetermined processing on the image IMG1 shown in FIG. 2 and includes a pixel counting unit 21, a sky region specifying unit 12a, an execution feasibility determination unit 13a, and an image generation unit 14. That is, the control unit 10a differs from the control unit 10 mainly in having the pixel counting unit 21 in place of the relationship specifying unit 11, which means that its method of specifying the sky region Bs differs from that of the control unit 10.
For each of the divided regions A1 to A3, the pixel counting unit 21 classifies the color of each of the plurality of pixels constituting the image IMG1 into one of a plurality of predetermined colors. As the plurality of predetermined colors, for example, the 24 colors of the PCCS 24-hue circle can be used; as noted above, the predetermined colors are not limited to 24.
The pixel counting unit 21 reads the image data stored in the storage unit 70 and, like the relationship specifying unit 11, divides the image IMG1 into the plurality of divided regions A1 to A3 along its vertical direction as shown in FIG. 2. The pixel counting unit 21 also calculates the color of each pixel based on the gradation values of the pixels constituting the image IMG1, classifies each calculated color into one of the plurality of predetermined colors for each of the divided regions A1 to A3, and counts the number of classified pixels (appearance frequency) for each predetermined color. The pixel counting unit 21 transmits pixel-count data indicating the number of pixels associated with each predetermined color to the sky region specifying unit 12a.
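An illustrative sketch of this counting step and the decreasing-count test that follows it (the uniform hue split below merely stands in for the PCCS 24-hue circle; the actual class mapping is an assumption):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hue_class_counts(img_rgb, n_bands=3, n_classes=24):
    """Count, per band, how many pixels fall into each of n_classes hue bins."""
    hue = rgb_to_hsv(img_rgb.astype(np.float32) / 255.0)[..., 0]  # 0..1
    classes = np.minimum((hue * n_classes).astype(int), n_classes - 1)
    bounds = np.linspace(0, img_rgb.shape[0], n_bands + 1).astype(int)
    return [np.bincount(classes[t:b].ravel(), minlength=n_classes)
            for t, b in zip(bounds[:-1], bounds[1:])]

def decreasing_classes(counts_per_band):
    """Color classes whose count strictly decreases from top band to bottom:
    candidates for the distant-view (sky) region per Modification 1."""
    per_class = np.stack(counts_per_band)  # (n_bands, n_classes)
    return [c for c in range(per_class.shape[1])
            if all(per_class[i, c] > per_class[i + 1, c]
                   for i in range(per_class.shape[0] - 1))]
```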
Like the sky region specifying unit 12, the sky region specifying unit 12a specifies the sky region Bs in the image IMG1. In this modification, when, for any one of the predetermined colors, the number of pixels counted by the pixel counting unit 21 decreases from the divided region A1 (the upper divided region) toward the divided region A3 (the lower divided region), the sky region specifying unit 12a specifies the region formed by the pixels whose count decreases as the distant view region, and determines that the image IMG1 contains a distant view region serving as the sky region Bs. When the sky region specifying unit 12a determines that the image IMG1 contains the sky region Bs, it transmits color data indicating the predetermined color whose pixel count decreases (the predetermined color corresponding to the pixels constituting the distant view region) to the execution feasibility determination unit 13a.
Like the execution feasibility determination unit 13, the execution feasibility determination unit 13a determines, based on the color of the sky region Bs, whether to perform the blurring process on the sky region Bs. In this modification, when the predetermined color whose pixel count decreases is sky blue, the execution feasibility determination unit 13a identifies the pixels showing that color as constituting the sky region Bs and determines that the blurring process is to be performed on the sky region Bs containing those pixels. Whether the predetermined color is sky blue is determined, as above, by whether it is, for example, 15:BG, 16:gB, or 17:B in the 24-hue circle. The execution feasibility determination unit 13a then transmits the result of the determination as to whether to perform the blurring process on the sky region Bs to the image generation unit 14.
In other words, in this modification, the data indicating the color of the sky region Bs (the portions that can be subject to the determination process) and its pixel count, which the determination process of the execution feasibility determination unit 13a requires, have already been evaluated in the sky region specifying unit 12a. The execution feasibility determination unit 13a can therefore directly adopt a sky region Bs showing a sky-blue color, as specified by the sky region specifying unit 12a, as the region on which the blurring process can be executed.
Thus, the display device 100a evaluates pixel colors when specifying the sky region Bs. Unlike the display device 100, it therefore does not need the two-stage procedure of first specifying the sky region Bs based on the above relationship and then evaluating the colors of its pixels. The display device 100a can accordingly perform the processing more simply than the display device 100.
(Processing in display device 100a)
Next, an example of the processing in the display device 100a (a control method of the image processing device) will be described with reference to FIG. 7. FIG. 7 is a flowchart illustrating an example of the processing in the display device 100a.
First, the pixel counting unit 21 reads the image data from the storage unit 70 (S11) and divides the image IMG1 into the plurality of divided regions A1 to A3. The pixel counting unit 21 classifies the calculated color of each pixel into one of the plurality of predetermined colors for each of the divided regions A1 to A3 and counts the number of pixels for each predetermined color (S12).
When, for any one of the plurality of predetermined colors, the number of pixels decreases from the divided region A1 toward the divided region A3, the sky region specifying unit 12a specifies the region formed by the pixels whose count decreases as the distant view region. That is, the sky region specifying unit 12a determines that the image IMG1 contains a distant view region and specifies it as the sky region Bs (S13; sky region specifying step). The execution feasibility determination unit 13a determines whether the predetermined color whose pixel count decreases is sky blue (S14; execution feasibility determination step). If YES in S14, the processing proceeds to S6. If NO in S14, the image generation unit 14 displays the image IMG1 on the display unit 50 without performing the blurring process on it (S8).
<Modification 2>
The control unit 10 or 10a specifies a region as the distant view region when, across the divided regions A1 to A3, the number of pixels at the peak of the graph expressing the above relationship, or the number of pixels of a predetermined color, decreases, but this is not restrictive. For example, the control unit 10 or 10a may specify the region containing those pixels as the distant view region even when the number of pixels tends to decrease overall across the divided regions A1 to A3 but stays constant in part. That is, when there is a set of divided regions in which the number of such pixels decreases from an upper divided region of the image IMG1 toward a lower one, the region containing those pixels may be specified as the distant view region. For example, when the number of pixels is constant between the divided regions A1 and A2 and decreases between the divided regions A2 and A3, the divided regions A2 and A3 correspond to such a set of divided regions. In this case, even when a blue sky extends over the whole of adjacent divided regions, the sky region Bs of those divided regions can be made the target of the blurring process. In particular, when the divided regions are set finely as in Embodiment 2, the number of pixels can be expected to stay constant between adjacent divided regions; even in such a case, the distant view region can be specified with good accuracy.
Further, when the number of such pixels is constant over the entire image IMG1 (that is, constant across the divided regions A1 to A3), it may be determined that the image IMG1 contains a distant view region. In this case, the blurring process can be performed on the sky region Bs even when the entire image IMG1 is a blue sky.
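One way to read Modification 2 as a test (an interpretive sketch: "non-increasing with at least one decrease, or constant throughout" is a reading of the text, not wording from the patent):

```python
def plateau_tolerant_decreasing(counts):
    """counts: peak (or color-class) pixel count per band, top first.
    Accepts plateaus between adjacent bands, and the all-constant case
    (e.g., an image that is blue sky from top to bottom)."""
    pairs = list(zip(counts, counts[1:]))
    if all(a == b for a, b in pairs):
        return True                      # constant over the whole image
    return all(a >= b for a, b in pairs) and any(a > b for a, b in pairs)
```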
<Modification 3>
The control unit 10 or 10a may detect edges in the image IMG1. In this case, the control unit 10 or 10a detects the edges after acquiring the image data from the storage unit 70, thereby detecting the objects Obj1 to Obj4 in the image IMG1 (for example, the contours of the objects Obj1 to Obj4).
When the image generation unit 14 applies a low-pass filter to blur the sky region Bs, the texture of the objects Obj1 and Obj3 bordering the sky region Bs may be degraded, or the edges of those objects may become conspicuous. If the edges are detected, the sky region Bs can be blurred while reliably blurring only the edges of the objects Obj1 and Obj3 or their vicinity, which reduces these possibilities.
However, as long as blurring of a certain width is performed at the boundary between the sky region Bs and the objects adjacent to it, the appearance of the sky region Bs can be improved. Edge detection is therefore not essential; that is, it is not always necessary to detect the objects Obj1 to Obj4 by detecting edges as described above.
<Modification 4>
In the above, no particular processing is performed on the regions of the image IMG1 other than the blurred sky region Bs, but this is not restrictive. For example, the control unit 10 or 10a may generate the processed image IMG2 by combining the image obtained by blurring the sky region Bs with an image obtained by applying other image processing to the image IMG1. However, when different processing has been applied to the same pixels of the two images, the blurring of the sky region Bs is applied preferentially.
Examples of the other image processing include enhancement processing or blurring processing performed according to predetermined conditions. An example of performing enhancement or blurring according to such conditions is described from Embodiment 3 onward.
<Modification 5>
In the above, the display device 100 was described as including the control unit 10 and the display unit 50, but this is not restrictive; for example, an external device (image processing device) communicably connectable to a display device having the display unit 50 may have the image processing function of the control unit 10. The same applies to the display device 100a.
<Main Effects>
The display device 100 or 100a determines, based on the color of the sky region Bs, whether the blurring process may be performed, and performs the blurring process on the sky region Bs when it determines that it should be. The color of a sky region Bs subject to the blurring process is a color that can be recognized as a surface color (e.g., blue sky). For a sky region Bs recognized as a surface color, eliminating edges and texture therefore enhances its visibility as a surface color and brings it closer to how the sky actually appears in the natural world.
[Embodiment 2]
Another embodiment of the present disclosure is described below with reference to FIG. 8. For convenience of explanation, members having the same functions as those described in the preceding embodiment are given the same reference numerals, and their description is omitted.
FIG. 8 is a diagram illustrating an example of the image IMG1 to be displayed on the display unit 50 and a plurality of divided regions B1 to B6 formed in the image IMG1. As described above, in the display device 100, the width h of the divided regions A1 to A3 was set so that the width H of the image IMG1 is divided into approximately three equal parts.
In the display device 200 of the present embodiment, by contrast, at least two of the divided regions B1 to B6 differ in width. Specifically, in the display device 200, as shown in FIG. 8, the width ha (vertical length) of the divided regions B1 to B4 (the upper divided regions) is shorter than the width hb (vertical length) of the divided regions B5 and B6 (the lower divided regions). The divided regions B1 to B4 are set in the upper half of the image IMG1, and the divided regions B5 and B6 in its lower half. In this example, each width ha of the divided regions B1 to B4 is set to 1/8 of the width H of the image IMG1, and the width hb of the divided regions B5 and B6 to 1/4 of the width H.
In general, when the sky region Bs exists in the image IMG1, it rarely appears only in the lower half of the image. Therefore, by setting the width ha of the upper divided regions B1 to B4 shorter than the width hb of the lower divided regions B5 and B6, the sky region Bs can be specified with higher accuracy.
Note that the method of setting the divided regions B1 to B6 is not limited to the above. For example, the widths ha and hb of the divided regions B1 to B6 may be set so as to become longer stepwise toward the bottom. Further, the width ha of at least one of the divided regions B2 to B4 constituting the upper divided regions (for example, the divided region B4) may be the same as the width hb of the divided regions B5 and B6 constituting the lower divided regions. In short, it suffices that the width ha of at least one of the upper divided regions B1 to B4 is set shorter than the width hb of at least one of the lower divided regions B5 and B6.
The display device 200 can also perform the processing of the display device 100 or 100a of Embodiment 1, in which case it achieves the effects of the display device 100 or 100a. The same applies to the display devices 300 to 500 of Embodiment 3 and the subsequent embodiments described below.
[Embodiment 3]
Another embodiment of the present disclosure will be described below with reference to FIGS. 9 to 15. For convenience of explanation, members having the same functions as those described in the above embodiments are given the same reference signs, and their descriptions are omitted.
Embodiment 1 described the processing in which the blurring process is performed on the sky region Bs based on the color of the sky region Bs. In addition to that processing, the display device 300 according to Embodiment 3 generates the processed image IMG2 by performing the enhancement process or the blurring process while also taking into account the position on the display unit 50 at which the viewer 90 gazes.
<Configuration of Display Device 300>
First, the display device 300 of this embodiment will be described with reference to FIGS. 9 to 14. FIG. 9 is a diagram illustrating an example of the configuration of the display device 300. FIG. 10 is a diagram illustrating an example of the image IMG1 to be displayed on the display unit 50 and a plurality of divided regions A1 to A3 formed in the image IMG1. FIGS. 11 to 14 are diagrams illustrating examples of the processed image IMG2 displayed on the display unit 50.
As shown in FIG. 9, the display device 300 displays an image and includes a control unit 10b (image processing device), a display unit 50, a gazing point detection sensor 60 (sensor), and a storage unit 70.
The gazing point detection sensor 60 detects the gazing point F of the viewer 90 on the display unit 50 and transmits gazing point data indicating the detected gazing point F to the control unit 10b. The gazing point detection sensor 60 is realized by, for example, an eye tracker that detects the movement of the viewer 90's line of sight by detecting the movement of the viewer 90's eyeballs. The position of the gazing point F is represented by, for example, xy coordinates arbitrarily set on the display unit 50.
<Specific Configuration of Control Unit 10b>
The control unit 10b controls the display device 300 in an integrated manner. In the present embodiment in particular, the control unit 10b has an image processing function for performing predetermined processing on the image IMG1 illustrated in FIG. 10, and includes a relationship specifying unit 11, a sky region specifying unit 12, an execution feasibility determination unit 13, an image generation unit 14b, an attention area specifying unit 31, and an object detection unit 32. That is, the control unit 10b differs from the control unit 10 in that it includes the image generation unit 14b, the attention area specifying unit 31, and the object detection unit 32.
Although the present embodiment describes the control unit 10b as having the configuration of the control unit 10, it may of course have the configuration of the control unit 10a instead. The processing of the attention area specifying unit 31 and the object detection unit 32 is performed when there is only one viewer. That is, when the display device 300 determines, by means of a face detection function that detects the face of the viewer 90, that a plurality of viewers 90 are looking at the display unit 50, it does not perform the processing of the attention area specifying unit 31 and the object detection unit 32. The same applies to Embodiments 4 and 5.
Based on the gazing point F detected by the gazing point detection sensor 60, the attention area specifying unit 31 specifies the attention area TA, which is the divided region among the divided regions A1 to A3 that the viewer 90 is paying attention to. That is, the attention area specifying unit 31 specifies the attention area TA among the divided regions A1 to A3 and also specifies the non-attention areas (the divided regions other than the attention area TA).
Specifically, like the relationship specifying unit 11, the attention area specifying unit 31 reads the image data stored in the storage unit 70, identifies the vertical direction of the image IMG1 indicated by the image data, and divides the image IMG1 into the divided regions A1 to A3 in accordance with that vertical direction.
The attention area specifying unit 31 determines which of the position coordinates representing the divided regions A1 to A3 the gazing point (position coordinates) F indicated by the gazing point data acquired from the gazing point detection sensor 60 falls within. That is, the attention area specifying unit 31 identifies which of the divided regions A1 to A3 contains the gazing point F and specifies the divided region containing the gazing point F as the attention area TA. In the example of FIG. 10, the divided region A2 is specified as the attention area TA, and the divided regions A1 and A3 are specified as non-attention areas.
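For illustration, this lookup can be sketched as follows, assuming the divided regions are horizontal bands stacked from top to bottom; the names and the three-band layout are illustrative, not taken from the embodiment's implementation.

def split_into_bands(image_height: int, num_bands: int = 3) -> list[tuple[int, int]]:
    """(top, bottom) pixel rows of each divided region A1..An."""
    edges = [round(i * image_height / num_bands) for i in range(num_bands + 1)]
    return [(edges[i], edges[i + 1]) for i in range(num_bands)]

def attention_region(gaze_y: int, bands: list[tuple[int, int]]) -> int:
    """Index of the band (attention area TA) containing the gaze point's y coordinate."""
    for i, (top, bottom) in enumerate(bands):
        if top <= gaze_y < bottom:
            return i
    return len(bands) - 1  # clamp to the lowest band

bands = split_into_bands(1080, 3)               # e.g. a 1080-row image, A1 to A3
ta = attention_region(gaze_y=700, bands=bands)  # -> 1, i.e. the divided region A2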
The attention area specifying unit 31 transmits, to the object detection unit 32, area specifying data indicating the correspondence between the divided regions A1 to A3 and the attention area TA and non-attention areas.
The object detection unit 32 detects the objects Obj1 to Obj4 in the image IMG1. Specifically, the object detection unit 32 reads the image data stored in the storage unit 70 and detects the objects Obj1 to Obj4 by detecting edges (edge regions) in the image IMG1 indicated by the image data.
These edges and objects are detected by known methods. For example, using a filter, the object detection unit 32 detects as an edge a region in which the difference in pixel value between adjacent pixels is equal to or greater than a predetermined value (that is, a region in which the brightness changes abruptly), and detects as an object a closed region (or a region regarded as closed) formed by edges. Detection is not limited to differences in pixel value; for example, a region in which the difference in hue values is equal to or greater than a predetermined value (that is, a region in which the color changes abruptly) may be detected as an edge. Further, when the image IMG1 includes depth information indicating values in the depth direction, edge processing based on the depth information may be performed.
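For illustration, the threshold-based edge detection just described can be sketched as follows, assuming an 8-bit grayscale image held in a NumPy array; the threshold value and function name are illustrative assumptions.

import numpy as np

def edge_mask(gray: np.ndarray, threshold: int = 32) -> np.ndarray:
    """Mark pixels whose value differs from a horizontal or vertical
    neighbor by at least `threshold` (an abrupt brightness change)."""
    g = gray.astype(np.int16)          # avoid uint8 overflow in differences
    dx = np.abs(np.diff(g, axis=1))    # differences between horizontal neighbors
    dy = np.abs(np.diff(g, axis=0))    # differences between vertical neighbors
    mask = np.zeros(gray.shape, dtype=bool)
    mask[:, 1:] |= dx >= threshold
    mask[1:, :] |= dy >= threshold
    return mask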
The object detection unit 32 also detects whether each of the detected objects Obj1 to Obj4 lies in the attention area TA or in a non-attention area indicated by the area specifying data, and transmits the detection result to the image generation unit 14b.
Within the object detection unit 32, the part having the function of detecting that the objects Obj1 to Obj4 are present in the attention area TA may be referred to as a first object detection unit, and the part having the function of detecting that the objects Obj1 to Obj4 are present in a non-attention area may be referred to as a second object detection unit.
In the present embodiment, the object detection unit 32 is described as combining the functions of both the first and second object detection units; however, the first and second object detection units may each be provided as separate functional units.
In the example of FIG. 10, the object Obj1 is included in the divided region A2, which is the attention area TA, and also in the divided regions A1 and A3, which are non-attention areas. The object Obj2 is included in the divided region A2 and also in the divided region A3. The object Obj3 is included in the divided region A2 and also in the divided region A1. The object Obj4 is included only in the divided region A3, which is a non-attention area.
Accordingly, the object detection unit 32 transmits to the image generation unit 14b a detection result indicating that the objects Obj1 to Obj3 are included in the attention area TA (divided region A2) and that the object Obj4 is included only in a non-attention area (divided region A3). The object detection unit 32 also transmits, together with the detection result, coordinate data indicating the position coordinates of the objects Obj1 to Obj4 in the image IMG1. When an object is included in both the attention area TA and a non-attention area (the objects Obj1 to Obj3 shown in FIG. 10), a detection result indicating that it is included in both (in the divided region A2 and in the divided region A1 and/or A3) may be transmitted.
The image generation unit 14b performs the same processing as the image generation unit 14 (hereinafter also referred to as processing P1). That is, when the execution feasibility determination unit 13 determines that the blurring process is to be performed on the sky region Bs, the image generation unit 14b generates an image (hereinafter also referred to as image IA) by performing the blurring process on the sky region Bs. When it is determined that the blurring process is not to be performed, the image IMG1 itself is used as the image IA. In addition, the image generation unit 14b generates an image (hereinafter also referred to as image IB) by performing, on the image IMG1, the enhancement process on at least part of the attention area TA specified by the attention area specifying unit 31 among the divided regions A1 to A3 and the blurring process on at least part of the non-attention areas (hereinafter also referred to as processing P2). The enhancement process is realized by a known method using, for example, an edge enhancement filter. The image generation unit 14b then generates the processed image IMG2 by combining the images IA and IB generated by processings P1 and P2.
However, when the processing applied to corresponding pixels differs between the images IA and IB and the image IA is one in which the blurring process has been performed on the sky region Bs, the pixels of the image IA (the blurred pixels of the sky region Bs) are applied with priority. That is, when the blurring process has been performed on the sky region Bs, the image generation unit 14b applies the result of that process to the processed image IMG2.
In other words, for the non-processing region of the attention area TA, that is, the portion other than the sky region Bs on which the blurring process is performed, the image generation unit 14b performs the enhancement process on at least part of it.
<Specific Example of Composition Processing of Images IA and IB>
The image generation unit 14b sequentially determines, for each of the pixels of the image IMG1, whether the color indicated by the pixel is a sky color. For this determination, for example, the determination process of the execution feasibility determination unit 13 described in Embodiment 1 can be applied. The processed image IMG2 is then generated by outputting, for each pixel determined to be a sky color, the pixel data of the corresponding pixel of the image IA, and for each pixel determined not to be a sky color, the pixel data of the corresponding pixel of the image IB.
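For illustration, this per-pixel composition can be sketched as follows, with a simple blue-dominance test standing in for the Embodiment 1 sky-color determination; the thresholds are illustrative assumptions, not the patent's criteria.

import numpy as np

def is_sky_color(rgb: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels treated as sky: bright, blue-dominant pixels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (b > 128) & (b > r + 20) & (b > g + 10)

def compose(img1: np.ndarray, ia: np.ndarray, ib: np.ndarray) -> np.ndarray:
    """IMG2: IA pixels where IMG1 is sky-colored, IB pixels elsewhere."""
    mask = is_sky_color(img1)[..., np.newaxis]  # broadcast over RGB channels
    return np.where(mask, ia, ib)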
The processed image IMG2 is not limited to the above and may also be generated as follows. When the image generation unit 14b has generated the image IA by performing the blurring process on the sky region Bs, it has naturally identified the position of the sky region Bs in the image IMG1. The image generation unit 14b therefore generates a mask image that masks the sky region Bs (or everything other than the sky region Bs) and generates the processed image IMG2 by performing a logical operation using the mask image, the image IA, and the image IB.
A reduced image may be used as the mask image. This is because the processing of the image generation unit 14b is a blurring process on the sky region Bs, so block-shaped artifacts are unlikely to appear in the processed image IMG2. For example, when a 9 × 9 low-pass filter is used for the blurring process, a mask image of 1/8 the size of the image IMG1 can be used. When a reduced image is used as the mask image, interpolation based on intra-block addresses may be performed between the mask image and the images IA and IB.
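For illustration, compositing with a 1/8-size mask image can be sketched as follows, assuming NumPy arrays; nearest-neighbor block lookup stands in for the intra-block interpolation mentioned above, and the scale factor is illustrative.

import numpy as np

def compose_with_reduced_mask(mask_small: np.ndarray, ia: np.ndarray,
                              ib: np.ndarray, scale: int = 8) -> np.ndarray:
    """Expand a small boolean sky mask to full size, then select IA over IB."""
    h, w = ia.shape[:2]
    ys = np.minimum(np.arange(h) // scale, mask_small.shape[0] - 1)
    xs = np.minimum(np.arange(w) // scale, mask_small.shape[1] - 1)
    mask_full = mask_small[np.ix_(ys, xs)]  # look up each pixel's block
    return np.where(mask_full[..., np.newaxis], ia, ib)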
<Specific Example of the Processed Image IMG2>
By performing the above-described processing, the image generation unit 14b generates, for example, the processed images IMG2 shown in FIGS. 11 to 14.
(Processed image IMG2 shown in FIG. 11)
The processed image IMG2 shown in FIG. 11 reflects the image IA generated as a result of the following processing P1. That is, the execution feasibility determination unit 13 determines that the sky region Bs is subject to the blurring process, and the image generation unit 14b performs the blurring process on the sky region Bs. Along with this blurring process, the blurring process is also performed on (i) the edge between the sky region Bs and the object Obj1 and (ii) the edge between the sky region Bs and the object Obj3.
The processed image IMG2 shown in FIG. 11 also reflects the image IB generated as a result of the following processing P2.
Specifically, as processing P2, the image generation unit 14b performs, based on the detection result acquired from the object detection unit 32, the enhancement process on each object at least part of which lies in the attention area TA and the blurring process on each object that lies only in non-attention areas (processing PA).
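For illustration, the decision rule of processing PA can be sketched as follows, representing each object by the pixel rows it spans; the data structure and names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    name: str
    top_row: int     # uppermost pixel row of the object
    bottom_row: int  # lowermost pixel row of the object

def regions_touched(obj: DetectedObject, bands: list[tuple[int, int]]) -> set[int]:
    """Indices of the divided regions the object overlaps vertically."""
    return {i for i, (top, bottom) in enumerate(bands)
            if obj.top_row < bottom and obj.bottom_row >= top}

def decide_pa(obj: DetectedObject, bands: list[tuple[int, int]], ta: int) -> str:
    """Enhance an object that reaches the attention region, blur it otherwise."""
    return "enhance" if ta in regions_touched(obj, bands) else "blur"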
In the example of FIG. 10, parts of the objects Obj1 to Obj3 are included in the attention area TA of the image IMG1. The image generation unit 14b therefore identifies the positions of the objects Obj1 to Obj3 in the image IMG1 using the coordinate data and performs the enhancement process on the objects Obj1 to Obj3; that is, the edges of the objects Obj1 to Obj3 are enhanced. The image generation unit 14b likewise performs the enhancement process on an object that is entirely contained in the attention area TA.
On the other hand, in the image IMG1, the object Obj4 exists only in a non-attention area (divided region A3). The image generation unit 14b therefore identifies the position of the object Obj4 in the image IMG1 using the coordinate data and performs the blurring process on the object Obj4; that is, the edge of the object Obj4 is blurred.
In the example of FIG. 10, as described above, the object Obj1 spans all of the divided regions A1 to A3, so the enhancement process is performed on the object Obj1 whichever divided region is the attention area TA. The object Obj2 spans the divided regions A2 and A3, so it is enhanced when the attention area TA is the divided region A2 or A3 and blurred when it is the divided region A1. The object Obj3 spans the divided regions A1 and A2, so it is enhanced when the attention area TA is the divided region A1 or A2 and blurred when it is the divided region A3. The object Obj4 exists only in the divided region A3, so it is enhanced when the attention area TA is the divided region A3 and blurred when it is the divided region A1 or A2.
The image generation unit 14b then combines the images IA and IB generated by the above processing so that the blurred sky region Bs of the image IA is applied with priority, thereby generating the processed image IMG2 shown in FIG. 11. Although all edges of the objects Obj1 to Obj3 are enhanced in processing P2 (PA), in the processed image IMG2 the edges (i) and (ii) above, which were subject to the blurring process in processing P1, are blurred. In other words, for the objects Obj1 to Obj3, at least part of which is included in the attention area TA, the image generation unit 14b performs the enhancement process only on the portions included in the non-processing region.
In the example of FIG. 10, most of the divided region A1 is the sky region Bs. Therefore, even if the gazing point F lies in the divided region A1, the blurred sky region Bs of the image IA is applied with priority, and so the edge of the object Obj3 contained in the divided region A1 is blurred.
(Processed image IMG2 shown in FIG. 12)
The processed image IMG2 shown in FIG. 12 reflects the image IA generated as a result of the same processing P1 as in FIG. 11.
The processed image IMG2 shown in FIG. 12 also reflects the image IB generated as a result of the following processing P2.
Specifically, as processing P2, the image generation unit 14b determines, based on the positions of the lowermost ends Obj1b to Obj4b (lower ends) of the objects Obj1 to Obj4 in the image IMG1, whether to perform the enhancement process on each object at least part of which lies in the attention area TA (processing PB).
In this case, the image generation unit 14b uses, for example, the above detection result to determine, for each of the objects Obj1 to Obj4, whether the object is entirely contained in the attention area TA or entirely contained in non-attention areas. When it determines that an object is entirely contained in the attention area TA, the image generation unit 14b performs the enhancement process on that object. When it determines that an object is entirely contained in non-attention areas, the image generation unit 14b performs the blurring process on that object.
On the other hand, when the image generation unit 14b determines that an object is not entirely contained in the attention area TA (that is, the object straddles the attention area TA and a non-attention area), it uses the coordinate data to identify the position coordinates of that object's lowermost end Obj1b to Obj4b in the image IMG1. The image generation unit 14b then determines whether those position coordinates fall within a divided region below the attention area TA.
When the image generation unit 14b determines that the position coordinates fall within a divided region below the attention area TA, it determines that the enhancement process is to be performed on that object. Conversely, when it determines that the position coordinates do not fall within a divided region below the attention area TA (that is, the lowermost end of the object lies in the attention area TA), it determines that the blurring process is to be performed on that object.
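For illustration, the decision rule of processing PB can be sketched as follows, reusing DetectedObject and regions_touched from the processing PA sketch above; the representation is again an illustrative assumption.

def decide_pb(obj: DetectedObject, bands: list[tuple[int, int]], ta: int) -> str:
    """Enhance a straddling object only if its lowermost end lies below TA."""
    touched = regions_touched(obj, bands)
    if touched == {ta}:
        return "enhance"  # entirely inside the attention region
    if ta not in touched:
        return "blur"     # entirely in non-attention regions
    # Straddling case: find the band containing the lowermost end.
    lowest_band = next(i for i, (top, bottom) in enumerate(bands)
                       if top <= obj.bottom_row < bottom)
    return "enhance" if lowest_band > ta else "blur"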
In the example of FIG. 10, as described above, parts of the objects Obj1 to Obj3 are included in the attention area TA (divided region A2), and the object Obj4 is entirely contained in a non-attention area (divided region A3). The image generation unit 14b therefore determines that the blurring process is to be performed on the object Obj4. For the objects Obj1 to Obj3, the image generation unit 14b determines whether their lowermost ends Obj1b to Obj3b are included in the divided region A3 below the attention area TA.
The lowermost end Obj1b of the object Obj1 and the lowermost end Obj2b of the object Obj2 are each included in the divided region A3, so the image generation unit 14b determines that the enhancement process is to be performed on the objects Obj1 and Obj2. On the other hand, the lowermost end Obj3b of the object Obj3 is not included in the divided region A3 (it is included in the attention area TA), so the image generation unit 14b determines that the blurring process is to be performed on the object Obj3.
That is, when the attention area TA is the divided region A2, the image generation unit 14b judges that the viewer 90 is paying attention to the objects Obj1 and Obj2, which mainly exist in its vicinity, performs the enhancement process on the objects Obj1 and Obj2, and performs the blurring process on the other objects Obj3 and Obj4. Therefore, even if the viewer 90 is looking at the object Obj3 (around the middle of the mountain) within the attention area TA, it is judged that the object Obj3 is not being gazed at as long as the attention area TA is the divided region A2, and the blurring process is performed on the object Obj3.
Then, as in the case of FIG. 11, the image generation unit 14b generates the processed image IMG2 shown in FIG. 12 by combining the images IA and IB. In the processed image IMG2 shown in FIG. 12, all edges of the objects Obj1 and Obj2 are enhanced in processing P2 (PB), but because processing P1 is reflected, only the edges of the object Obj1 that are included in the non-processing region are enhanced.
In other words, based on the positions of the lowermost ends Obj1b to Obj4b, the image generation unit 14b determines whether to perform the enhancement process on each of the objects Obj1 to Obj4 at least part of which lies in the non-processing region. Based on the determination result, the image generation unit 14b then performs the enhancement process only on the portions of the objects included in the non-processing region (in the example of FIG. 12, part of the object Obj1 and the whole of the object Obj2).
(Other processed images IMG2)
When the blurring process has not been performed on the sky region in the image IA, combining the image IA with the image IB generated as a result of the above-described processing PA or PB results in the image IB being reflected as-is in the processed image IMG2. In this case, the image IMG1 contains, unlike the sky region Bs, a sky region Bsn that includes visible edges or texture (for example, a sunset sky).
When processing PA has been performed, enhancement or blurring is applied in the processed image IMG2 as shown in FIG. 13. When processing PB has been performed, enhancement or blurring is applied in the processed image IMG2 as shown in FIG. 14.
<Processing in Display Device 300>
Next, an example of the processing in the display device 300 (a control method of the image processing device) will be described with reference to FIG. 15. FIG. 15 is a flowchart illustrating an example of the processing in the display device 300. The sky region blur determination processing (SA; sky region specifying step, execution feasibility determination step) shown in FIG. 15 refers to S1 to S7 and S9 in FIG. 5, and the image IA is generated as a result. Instead of this processing, the image IA may be generated by the processing of S11 to S14, S6, S7, and S9 shown in FIG. 7.
First, the attention area specifying unit 31 divides the image IMG1 into the divided regions A1 to A3 and specifies the attention area TA from the divided regions A1 to A3 based on the gazing point data indicating the gazing point F detected by the gazing point detection sensor 60 (S21). Next, the object detection unit 32 reads the image data from the storage unit 70 (S22) and detects edges in the image IMG1 indicated by the image data (S23). The object detection unit 32 then detects the objects Obj1 to Obj4 contained in the image IMG1 using the detected edges (S24) and detects (determines) whether each of the objects Obj1 to Obj4 lies in the attention area TA or in a non-attention area (S25).
Thereafter, the image generation unit 14b generates the image IB based on the detection result of the object detection unit 32. Specifically, the image generation unit 14b performs the enhancement process on at least part of the attention area TA of the image IMG1 (S26) and the blurring process on at least part of the non-attention areas (S27). Concretely, the above-described processing PA or PB is performed in S26 and S27, and the image IB is generated as a result.
The image generation unit 14b generates the processed image IMG2 by combining the image IA generated by the sky region blur determination processing (SA) and the image IB generated by the processing of S21 to S27 (S28; image generation step). The image generation unit 14b then causes the display unit 50 to display the generated processed image IMG2 (S8).
The processing of S21 and the processing of S22 to S24 may be performed in parallel, or the processing of S21 may be performed after the processing of S22 to S24. Likewise, the processing of S26 and S27 may be performed in parallel, or the processing of S26 may be performed after the processing of S27. Further, the processing of S21 to S27 may be performed after the processing of SA, or the processing of SA may be performed after the processing of S21 to S27.
<Main effects>
The display device 300 determines whether the blurring process can be performed on the sky region Bs and, in addition to processing P1, which performs the blurring process on the sky region Bs according to the determination result, performs processing P2, which applies enhancement or blurring based on the gazing point F. The display device 300 can therefore bring the sky region Bs close to its actual appearance in the natural world and, without requiring highly accurate detection of the gazing point F, can give the objects Obj1 to Obj4 a stereoscopic appearance close to how they actually look in the natural world.
<Modification 1>
In the above description, in processing P2 the image generation unit 14b performs the enhancement process on at least part of the attention area TA and the blurring process on at least part of the non-attention areas of the image IMG1; however, the processing is not limited to this.
That is, the image generation unit 14b does not necessarily need to perform both the enhancement process and the blurring process in processing P2 and may be configured to perform only the enhancement process on at least part of the attention area TA (non-processing region); the blurring process on at least part of the non-attention areas is not strictly necessary. Alternatively, the image generation unit 14b may perform only the blurring process on at least part of the non-attention areas in processing P2; that is, the enhancement process on at least part of the attention area TA may be omitted.
<Modification 2>
In the above description, the display device 300 includes the control unit 10b, the display unit 50, and the gazing point detection sensor 60; however, the configuration is not limited to this, and a display device including the control unit 10b and the display unit 50, on the one hand, and the gazing point detection sensor 60, on the other, may be configured as separate bodies. For example, an external device (image processing device) that can be communicably connected to a display device having the display unit 50 may have the image processing function of the control unit 10b. The gazing point detection sensor 60 only needs to be communicably connectable to the display device 300 including the control unit 10b or to the external device.
The same applies to Modifications 1 and 2 in the display devices 400 and 500 of the embodiments described below.
[Embodiment 4]
Embodiment 4 of the present disclosure will be described below with reference to FIGS. 16 to 19. For convenience of explanation, members having the same functions as those described in the above embodiments are given the same reference signs, and their descriptions are omitted.
<Configuration of Display Device 400>
First, the display device 400 of this embodiment will be described with reference to FIGS. 16 to 18. FIG. 16 is a diagram illustrating an example of the configuration of the display device 400. FIG. 17 is a diagram illustrating an example of the image IMG1 to be displayed on the display unit 50 and a plurality of divided regions A1 to A3 formed in the image IMG1. FIG. 18 is a diagram illustrating an example of the processed image IMG2 displayed on the display unit 50.
As shown in FIG. 16, the display device 400 displays an image and includes a control unit 10c (image processing device), a display unit 50, a gazing point detection sensor 60, and a storage unit 70.
The control unit 10c controls the display device 400 in an integrated manner. In the present embodiment in particular, the control unit 10c has an image processing function for performing predetermined processing on the image IMG1 illustrated in FIG. 17, and includes a relationship specifying unit 11, a sky region specifying unit 12, an execution feasibility determination unit 13, an image generation unit 14c, an attention area specifying unit 31, and an edge detection unit 42. That is, the control unit 10c differs from the control unit 10b in that it includes the image generation unit 14c and the edge detection unit 42.
The edge detection unit 42 reads the image data stored in the storage unit 70, detects edges in the image IMG1 indicated by the image data using the same method as the object detection unit 32 of Embodiment 3, and transmits the detection result to the image generation unit 14c. Unlike the object detection unit 32, the edge detection unit 42 only detects edges in the image IMG1 and does not detect the objects contained in the image IMG1.
In addition to the above-described processing P1, the image generation unit 14c performs, on the image IMG1, the enhancement process on the whole of the attention area TA specified by the attention area specifying unit 31 and the blurring process on the whole of the non-attention areas (processing P3). Like the image generation unit 14b, the image generation unit 14c generates the processed image IMG2 shown in FIG. 18 by combining the image IA generated in processing P1 and the image IC generated in processing P3.
That is, in processing P3, the image generation unit 14c of the present embodiment determines which of the divided regions A1 to A3 is the attention area TA and performs the enhancement process or the blurring process on each of the divided regions A1 to A3 accordingly.
In the example of FIG. 17, the image generation unit 14c acquires the area specifying data in processing P3 and thereby identifies the divided region A2 as the attention area TA and the divided regions A1 and A3 as non-attention areas. Based on the detection result of the edge detection unit 42, the image generation unit 14c generates the image IC by performing the enhancement process on the edges contained in the divided region A2, which is the attention area TA, and the blurring process on the edges contained in the divided regions A1 and A3, which are non-attention areas. The image generation unit 14c then generates the processed image IMG2 shown in FIG. 18 by combining the image IC with the image IA in which the blurring process has been performed on the sky region Bs. In this processed image IMG2, only the edges of the attention area TA contained in the non-processing region (edges other than the edges (i) and (ii) above) are enhanced, and the other edges are blurred.
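For illustration, processing P3 combined with the priority rule for the blurred sky can be sketched as follows; enhance, blur, and sky_mask_of are placeholder callables standing in for the filters and the Embodiment 1 determination, and split_into_bands and attention_region are from the earlier sketch.

import numpy as np

def processing_p3(img1, ia, gaze_y, enhance, blur, sky_mask_of):
    """Enhance the whole attention band, blur the rest, let sky pixels of IA win."""
    bands = split_into_bands(img1.shape[0], 3)
    ta = attention_region(gaze_y, bands)
    ic = img1.copy()
    for i, (top, bottom) in enumerate(bands):
        ic[top:bottom] = (enhance if i == ta else blur)(ic[top:bottom])
    mask = sky_mask_of(img1)[..., np.newaxis]  # image IA applies on sky pixels
    return np.where(mask, ia, ic)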
<Processing in Display Device 400>
Next, an example of the processing in the display device 400 (a control method of the image processing device) will be described with reference to FIG. 19. FIG. 19 is a flowchart illustrating an example of the processing in the display device 400.
After the processing of S23, the image generation unit 14c generates the image IC based on the detection result of the edge detection unit 42. Specifically, the image generation unit 14c performs the enhancement process on the whole of the attention area TA of the image IMG1 (S34) and the blurring process on the whole of the non-attention areas (S35). The image generation unit 14c generates the processed image IMG2 by combining the image IA generated by the processing of SA and the image IC generated by the processing of S21 to S23, S34, and S35 (S28).
The processing of S21 and the processing of S22 and S23 may be performed in parallel, or the processing of S21 may be performed after the processing of S22 and S23. Likewise, the processing of S34 and S35 may be performed in parallel, or the processing of S34 may be performed after the processing of S35. Further, the processing of S21 to S23, S34, and S35 may be performed after the processing of SA, or the processing of SA may be performed after the processing of S21 to S23, S34, and S35.
<Main effects>
In processing P3, the display device 400 performs the enhancement process on the attention area TA and the blurring process on the non-attention areas without identifying the objects Obj1 to Obj4. In other words, for the attention area TA, the image generation unit 14c performs the enhancement process on the whole of the non-processing region. The display device 400 can therefore generate the image IC by simpler processing than processing P2 of Embodiment 3; that is, the display device 400 can generate the processed image IMG2 by simpler processing than the display device 300.
<Modification>
The control unit 10c may determine, only for the specified attention area TA, whether there is a sky region Bs subject to the blurring process, and may perform the blurring process on the sky region Bs according to the determination result. In this case, the relationship specifying unit 11, the sky region specifying unit 12, and the execution feasibility determination unit 13 take the attention area TA specified by the attention area specifying unit 31 as the processing target. The image generation unit 14c performs the blurring process on the non-attention areas, and for the attention area TA, performs the blurring process on the sky region Bs when the sky region Bs is determined to be subject to the blurring process and the enhancement process on the non-processing region, thereby generating the processed image IMG2.
In processing P3, the blurring process is performed on the non-attention areas. Therefore, even if the result of processing P1 is applied with priority, the non-attention areas are blurred regardless of the presence or absence of a sky region Bs. That is, in processings P1 and P3, the blurring process may be applied twice when a sky region Bs lies in a non-attention area. When, as described above, the determination of whether to perform the blurring process on the sky region Bs is made only for the attention area TA and not for the non-attention areas, this duplication of the blurring process on the non-attention areas can be avoided. That is, the regions of the image IMG1 to be blurred can be determined (identified) efficiently, and the processing can be simplified.
Further, a backlight (not shown) to be controlled may be made to coincide with the regions where the blurring process is performed. In this way, these regions can be set with a degree of freedom, also taking into account the image effects of the image IMG1 and the requirements of the system or mechanism of the display device 400.
[Embodiment 5]
Embodiment 5 of the present disclosure will be described below with reference to FIGS. 20 to 22.
FIG. 20 is a diagram illustrating an example of the image IMG1 to be displayed on the display unit 50 and a plurality of divided regions C1 to C9 formed in the image IMG1. In the display device 500, the nine divided regions C1 to C9 are set so that the width H of the image IMG1 is divided into approximately nine equal parts. That is, the widths hc (vertical lengths) of the divided regions C1 to C9 all satisfy hc ≈ H/9.
FIGS. 21 and 22 are diagrams for explaining examples of the processing of the display device 500. In FIG. 21, the gazing point F lies in the divided region C4; in FIG. 22, it lies in the divided region C8. FIGS. 21 and 22 thus show cases in which the position of the gazing point F differs.
In the display device 500, the attention area specifying unit 31 specifies the attention area TC based on the gazing point F, as in Embodiments 3 and 4. As an example, in the case of FIG. 21, the attention area specifying unit 31 specifies the divided region C4 as the attention area TC; in the case of FIG. 22, it specifies the divided region C8 as the attention area TC.
Subsequently, in the display device 500, the attention area specifying unit 31 further specifies, among the divided regions C1 to C9, the neighboring regions NC, which are the divided regions located in the vicinity of the attention area TC. As an example, the attention area specifying unit 31 specifies the divided regions vertically adjacent to the attention area TC as the neighboring regions NC.
To distinguish between the plural neighboring regions NC, the neighboring region NC above the attention region TC is also referred to as the upper neighboring region NU, and the neighboring region NC below the attention region TC as the lower neighboring region NL.
In the case of FIG. 21, the attention area specifying unit 31 specifies the divided region C3 as the upper neighboring region NU and the divided region C5 as the lower neighboring region NL. That is, it specifies the divided regions C3 and C5 as the neighboring regions NC of the attention region TC (divided region C4).
In the case of FIG. 22, the attention area specifying unit 31 specifies the divided region C7 as the upper neighboring region NU and the divided region C9 as the lower neighboring region NL. That is, it specifies the divided regions C7 and C9 as the neighboring regions NC of the attention region TC (divided region C8).
Subsequently, in the display device 500, the attention area specifying unit 31 sets (specifies) the already specified attention region TC and neighboring regions NC as a new attention region. Hereinafter, to distinguish it from the attention region TC, the attention region newly set by the attention area specifying unit 31 is referred to as the new attention region TC2.
In the case of FIG. 21, the attention area specifying unit 31 sets the divided region C4 (the attention region TC) and the divided regions C3 and C5 (the neighboring regions NC) as the new attention region TC2. That is, it sets the three divided regions C3 to C5 as the new attention region TC2, and the six remaining divided regions (C1 to C2 and C6 to C9) as the non-attention region.
In the case of FIG. 22, the attention area specifying unit 31 sets the divided region C8 (the attention region TC) and the divided regions C7 and C9 (the neighboring regions NC) as the new attention region TC2. That is, it sets the three divided regions C7 to C9 as the new attention region TC2, and the six remaining divided regions (C1 to C6) as the non-attention region.
Subsequently, the display device 500 performs the process P1 and the process P2 or P3 described above, as in the third and fourth embodiments, and generates the processed image IMG2 by combining the results of these processes.
Generally, when the gazing point F is near a boundary between the divided regions C1 to C9, the attention region TC is likely to change as the gazing point F moves. There is therefore a concern that, in the processed image IMG2, the vicinity of such a boundary may appear unnatural to the viewer 90. This effect becomes more pronounced when the number of divided regions is small.
In view of this, the display device 500 is provided with more divided regions than the display devices 300 and 400 described above, and is configured to set the new attention region TC2 by adding the neighboring regions NC as a margin to the attention region TC.
As a result, the enhancement process can be applied to the non-processing region not only in the attention region TC but also in the neighboring regions NC. Therefore, even when the gazing point F moves, the possibility that the vicinity of the boundary of the attention region TC appears unnatural to the viewer 90 can be reduced. This is because, when the gazing point F near a boundary of the attention region TC moves outside that region, the moved gazing point F can be expected to lie inside a neighboring region NC.
Thus, according to the display device 500, the non-processing region included in an attention region wider than that of the display devices 300 and 400 (the new attention region TC2) can be targeted by the enhancement process, so that a processed image IMG2 with less sense of incongruity can be provided to the viewer 90.
Although the present embodiment illustrates the case of nine divided regions, the number of divided regions is not limited to this and may be any number of five or more.
Also, although the present embodiment illustrates the case where the divided regions vertically adjacent to the attention region TC are specified as the neighboring regions NC, the method of specifying the neighboring regions NC is not limited to this.
That is, the attention area specifying unit 31 may specify up to N1 divided regions above the attention region TC as the upper neighboring region NU, and up to N2 divided regions below the attention region TC as the lower neighboring region NL. The attention area specifying unit 31 may then specify the upper neighboring region NU consisting of N1 divided regions and the lower neighboring region NL consisting of N2 divided regions as the neighboring regions NC, where N1 and N2 are natural numbers. FIGS. 21 and 22 described above correspond to the case where N1 = N2 = 1.
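A minimal sketch of this selection, assuming 1-based region indices and clamping at the image edges (both assumptions of this sketch, not statements of the specification):

```python
def new_attention_region(gaze_region: int, n_regions: int = 9,
                         n1: int = 1, n2: int = 1) -> set[int]:
    """Return the region indices (C1..Cn) forming the new attention region
    TC2: the gazed region TC plus up to n1 regions above (NU) and up to
    n2 regions below (NL), clamped at the top and bottom of the image."""
    top = max(1, gaze_region - n1)
    bottom = min(n_regions, gaze_region + n2)
    return set(range(top, bottom + 1))

print(new_attention_region(4))  # FIG. 21: gaze in C4 -> {3, 4, 5}
print(new_attention_region(8))  # FIG. 22: gaze in C8 -> {7, 8, 9}
```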
The values of N1 and N2 may be set in advance by the designer of the display device 500 according to the number of divided regions. That is, how far from the attention region TC counts as the "vicinity" may be determined as appropriate by the designer of the display device 500. The values of N1 and N2 may also be made changeable by the user of the display device 500.
[Example of software implementation]
The control blocks of the display devices 100, 100a, 200, 300, 400, and 500 (particularly the control units 10, 10a, 10b, and 10c) may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
In the latter case, the display devices 100, 100a, 200, 300, 400, and 500 each include a CPU that executes the instructions of a program, which is software realizing each function; a ROM (Read Only Memory) or storage device (referred to as a "recording medium") on which the program and various data are recorded so as to be readable by a computer (or CPU); a RAM (Random Access Memory) into which the program is loaded; and the like. The object of the present disclosure is achieved when the computer (or CPU) reads the program from the recording medium and executes it. As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used. The program may also be supplied to the computer via any transmission medium (such as a communication network or a broadcast wave) capable of transmitting the program. Note that one aspect of the present disclosure can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
[Summary]
An image processing device (control units 10, 10a, 10b, 10c) according to aspect 1 of the present disclosure includes: a sky region specifying unit (12, 12a) that specifies a sky region (Bs, Bs1, Bs2, Bsn) corresponding to the sky in a display target image (image IMG1); an execution feasibility determination unit (13, 13a) that determines, based on the color of the sky region specified by the sky region specifying unit, whether to perform a blurring process on the sky region; and an image generation unit (14, 14b, 14c) that generates a processed image (IMG2) by performing the blurring process on the sky region when the execution feasibility determination unit determines that the blurring process is to be performed.
According to the above configuration, whether to perform the blurring process on the sky region is determined based on the color of the sky region, and the blurring process is performed on the sky region based on the result of that determination. For example, when the sky region has a specific color, the sky region is highly likely to be perceived as a film color. By blurring a sky region perceived as a film color, the appearance peculiar to film colors can be reproduced; that is, the sky region can be brought closer to its psychological appearance.
Therefore, the image processing device according to one aspect of the present disclosure can generate a processed image including a sky region that is close to its actual appearance in the natural world.
Furthermore, in the image processing device according to aspect 2 of the present disclosure, in aspect 1, the sky region specifying unit may specify a distant view region as the sky region when the display target image includes such a distant view region.
According to the above configuration, a distant view region included in the display target image can be specified as the sky region.
Furthermore, the image processing device (10) according to aspect 3 of the present disclosure, in aspect 2, may include a relationship specifying unit (11) that specifies, for each of a plurality of divided regions (A1 to A3, B1 to B6, C1 to C9) formed by dividing the display target image in its vertical direction, the relationship between each lightness value of the plurality of pixels constituting the display target image and the number of pixels having that lightness. The sky region specifying unit (12) may specify, as the distant view region, the region formed by the pixels corresponding to the peak of the graph (lightness distribution) expressing the relationship specified by the relationship specifying unit for each of the plurality of divided regions, when there is a set of divided regions in which the number of pixels indicated by that peak decreases from the upper divided regions toward the lower divided regions of the display target image, or when that number of pixels is constant across the display target image.
When the display target image includes a distant view region, the distant view region is generally contained more in the upper part of the display target image than in the lower part. In addition, the graph for a sky region is generally relatively narrow. Considering these points, when the number of pixels indicated by the peak decreases or is constant as described above, there is a high possibility that the display target image includes a distant view region serving as a sky region.
According to the above configuration, it is determined that the display target image includes a distant view region when the number of pixels indicated by the peak decreases or is constant as described above, so this determination can be made relatively easily.
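A minimal sketch of this test, assuming a lightness image normalized to [0, 1] and the band bounds computed earlier; the bin count and the non-strict comparison (which also accepts the constant case) are illustrative choices:

```python
import numpy as np

def peak_counts(lightness: np.ndarray, bounds: list[tuple[int, int]],
                n_bins: int = 64) -> list[int]:
    """For each divided region, histogram the lightness values and
    return the pixel count at the histogram peak."""
    peaks = []
    for top, bottom in bounds:
        hist, _ = np.histogram(lightness[top:bottom],
                               bins=n_bins, range=(0.0, 1.0))
        peaks.append(int(hist.max()))
    return peaks

def has_distant_view(peaks: list[int]) -> bool:
    """True if the peak counts decrease or stay constant from the top
    divided region toward the bottom one."""
    return all(a >= b for a, b in zip(peaks, peaks[1:]))
```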
Furthermore, in the image processing device according to aspect 4 of the present disclosure, in aspect 3, the sky region specifying unit may specify the pixels constituting the distant view region, and the execution feasibility determination unit (13) may determine that the blurring process is to be performed on the sky region (Bs, Bs1, Bs2) when the colors of the pixels specified by the sky region specifying unit include a specific color and the number of pixels having that specific color in the distant view region is equal to or greater than a predetermined number.
According to the above configuration, when the number of pixels of the specific color is equal to or greater than the predetermined number, the distant view region is determined to be a sky region having the specific color, and the blurring process can be performed on that sky region. Therefore, in the processed image, a sky region perceived as a film color (e.g., a blue sky) can be brought close to its actual appearance in the natural world.
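A minimal sketch of this feasibility test; the hue band standing in for the specific color (sky blue) and the pixel threshold are illustrative assumptions, since the claims leave both abstract:

```python
import numpy as np

def should_blur_sky(hsv: np.ndarray, sky_mask: np.ndarray,
                    hue_band: tuple[float, float] = (0.50, 0.65),
                    min_pixels: int = 5000) -> bool:
    """Blur the detected sky region only if at least min_pixels of it
    fall inside the hue band treated as the specific color.

    hsv      -- image in HSV with channels scaled to [0, 1]
    sky_mask -- boolean mask of the distant view region
    """
    hue = hsv[..., 0][sky_mask]
    in_band = (hue >= hue_band[0]) & (hue <= hue_band[1])
    return int(in_band.sum()) >= min_pixels
```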
Furthermore, the image processing device (10a) according to aspect 5 of the present disclosure, in aspect 2, may include a pixel counting unit (21) that, for each of a plurality of divided regions formed by dividing the display target image in its vertical direction, classifies each of the colors of the plurality of pixels constituting the display target image into one of a plurality of predetermined colors and counts the number of pixels for each predetermined color. The sky region specifying unit (12a) may specify, as the distant view region, the region formed by the pixels whose number decreases or is constant, when, for any one of the predetermined colors, there is a set of divided regions in which the number of pixels counted by the pixel counting unit decreases from the upper divided region (A1) toward the lower divided region (A3) of the display target image, or when that number of pixels is constant across the display target image.
As described above, the distant view region is generally contained more in the upper part of the display target image than in the lower part. In addition, the color distribution of a sky region is generally relatively narrow. Considering these points, when the number of pixels classified into one of the predetermined colors decreases or is constant as described above, there is a high possibility that the display target image includes a distant view region serving as a sky region.
According to the above configuration, it is determined that the display target image includes a distant view region when the number of pixels classified into one of the predetermined colors decreases or is constant as described above, so this determination can be made relatively easily.
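A minimal sketch of the pixel counting unit's classification; the palette of predetermined colors is an illustrative assumption (the specification leaves it open), and nearest-neighbor matching in RGB space is one simple choice:

```python
import numpy as np

PALETTE = np.array([
    [135, 206, 235],  # sky blue
    [255, 255, 255],  # white (clouds, haze)
    [ 34, 139,  34],  # green (vegetation)
    [128, 128, 128],  # gray (buildings, road)
], dtype=np.float64)

def color_counts_per_region(rgb: np.ndarray,
                            bounds: list[tuple[int, int]]) -> np.ndarray:
    """Classify every pixel to its nearest predetermined color and count
    pixels per color in each divided region; returns (regions, colors)."""
    counts = np.zeros((len(bounds), len(PALETTE)), dtype=np.int64)
    for i, (top, bottom) in enumerate(bounds):
        band = rgb[top:bottom].reshape(-1, 3).astype(np.float64)
        dists = np.linalg.norm(band[:, None, :] - PALETTE[None, :, :], axis=2)
        counts[i] = np.bincount(dists.argmin(axis=1), minlength=len(PALETTE))
    return counts
```

A color whose column of counts decreases (or stays constant) from the top band to the bottom band would then be taken to form the distant view region, mirroring the lightness-peak test of aspect 3.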
Furthermore, in the image processing device according to aspect 6 of the present disclosure, in aspect 5, the execution feasibility determination unit (13a) may determine that the blurring process is to be performed on the sky region when the predetermined color corresponding to the pixels constituting the specified distant view region is a specific color.
According to the above configuration, the blurring process can be performed on the distant view region as a sky region having the specific color. Therefore, in the processed image, a sky region perceived as a film color (e.g., a blue sky) can be brought close to its actual appearance in the natural world.
Furthermore, in the image processing device according to aspect 7 of the present disclosure, in aspect 3 or 6, the execution feasibility determination unit may target only the sky region included in an upper divided region (A1 or A2) for the blurring process when the contact area between the sky region included in that upper divided region and the sky region included in the lower divided region (A2 or A3) adjacent to it is less than a predetermined amount.
Even when a region is determined to be a sky region having the specific color, that region may not correspond to the actual sky but may instead depict an object that has the specific color yet differs from the sky. According to the above configuration, when the contact area is less than the predetermined amount, it is judged that such an object exists in the lower divided region, and the blurring process is not performed on that object. This prevents the problem of unintentionally blurring an object that can be recognized as a tangible entity.
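A minimal sketch of the contact test, assuming the contact area is measured as the number of columns in which the two sky masks touch across their shared boundary row; this measure and the threshold are illustrative assumptions:

```python
import numpy as np

def contact_area(sky_mask: np.ndarray, boundary_row: int) -> int:
    """Count the columns where the sky pixels of the upper and lower
    divided regions touch across their shared boundary."""
    upper_edge = sky_mask[boundary_row - 1]  # bottom row of upper region
    lower_edge = sky_mask[boundary_row]      # top row of lower region
    return int((upper_edge & lower_edge).sum())

def restrict_blur_target(sky_mask: np.ndarray, boundary_row: int,
                         min_contact: int = 50) -> np.ndarray:
    """If the contact falls below the threshold, treat the lower 'sky'
    as a distinct object and blur only the upper divided region."""
    if contact_area(sky_mask, boundary_row) < min_contact:
        restricted = sky_mask.copy()
        restricted[boundary_row:] = False
        return restricted
    return sky_mask
```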
Furthermore, in the image processing device according to aspect 8 of the present disclosure, in any one of aspects 3 to 7, among the plurality of divided regions, the divided regions located on the upper side (B1 to B4) may be shorter in the vertical direction than the divided regions located on the lower side (B5, B6) (width ha < width hb).
Generally, a sky region is often contained in the upper part of the display target image. Therefore, by making the vertical length of the upper divided regions shorter than that of the lower divided regions as in the above configuration, the sky region can be specified with high accuracy.
Furthermore, the image processing device according to aspect 9 of the present disclosure, in any one of aspects 1 to 8, may be communicably connectable to a display device (300, 400, 500) having a display surface (display unit 50) that displays the display target image and to a sensor (gazing point detection sensor 60) that detects the gazing point (F) of a viewer (90) on the display surface, and may include an attention area specifying unit (31) that specifies, based on the gazing point detected by the sensor, the attention region (TA, TC, new attention region TC2) that the viewer is paying attention to among the plurality of divided regions formed by dividing the display target image in its vertical direction. The image generation unit (14b, 14c) may perform an enhancement process on at least a part of the non-processing region of the attention region, i.e., the part other than the sky region subjected to the blurring process.
According to the above configuration, the image generation unit can perform the enhancement process on the non-processing region based on the result of the attention area specifying unit specifying the attention region from the gazing point. In other words, merely by specifying which divided region of the display surface the viewer is gazing at, the image processing device can generate a processed image in which a relative difference in appearance arises between the enhanced part of the non-processing region and the remaining parts that are not enhanced.
In this way, the image processing device can generate a processed image that gives a stereoscopic effect close to the actual appearance in the natural world without detecting the gazing point with high precision. It is therefore possible to generate a processed image having this stereoscopic effect with simple processing.
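A minimal sketch of this image generation step; the Gaussian blur and the unsharp-mask enhancement are stand-ins, since the specification does not fix the concrete filters, and the image is assumed to be an RGB float array in [0, 1]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_processed_image(img: np.ndarray, sky_mask: np.ndarray,
                           attention_mask: np.ndarray) -> np.ndarray:
    """Compose the processed image IMG2: blur the sky region, enhance the
    non-processing region of the attention region, leave the rest as-is."""
    blurred = gaussian_filter(img, sigma=(4, 4, 0))            # spatial blur only
    enhanced = np.clip(img + 0.6 * (img - blurred), 0.0, 1.0)  # unsharp mask

    out = img.copy()
    out[sky_mask] = blurred[sky_mask]           # blurring process
    enhance_mask = attention_mask & ~sky_mask   # non-processing region
    out[enhance_mask] = enhanced[enhance_mask]  # enhancement process
    return out
```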
Furthermore, a display device (100, 100a, 200, 300, 400, 500) according to aspect 10 of the present disclosure includes the image processing device according to any one of aspects 1 to 9.
The above configuration provides the same effects as the image processing device according to one aspect of the present disclosure.
Furthermore, a method of controlling an image processing device according to aspect 11 of the present disclosure includes: a sky region specifying step of specifying a sky region corresponding to the sky in a display target image; an execution feasibility determination step of determining, based on the color of the sky region specified in the sky region specifying step, whether to perform a blurring process on the sky region; and an image generation step of generating a processed image by performing the blurring process on the sky region when it is determined in the execution feasibility determination step that the blurring process is to be performed.
The above method provides the same effects as the image processing device according to one aspect of the present disclosure.
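Purely to show how the three steps compose, here is a sketch that chains the helpers from the earlier sketches; the sky-mask derivation (the dominant lightness bin of the top band) is a deliberately crude, hypothetical stand-in for the detection described in the embodiments:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def control_method(img: np.ndarray,
                   bounds: list[tuple[int, int]]) -> np.ndarray:
    """img is an RGB float image in [0, 1]; bounds are the divided regions."""
    lightness = img.mean(axis=2)                 # crude lightness proxy
    # Sky region specifying step
    if not has_distant_view(peak_counts(lightness, bounds)):
        return img
    hist, edges = np.histogram(lightness[slice(*bounds[0])],
                               bins=64, range=(0.0, 1.0))
    k = hist.argmax()
    sky_mask = (lightness >= edges[k]) & (lightness < edges[k + 1])
    # Execution feasibility determination step
    if not should_blur_sky(rgb_to_hsv(img), sky_mask):
        return img
    # Image generation step (whole image as attention region, for simplicity)
    return render_processed_image(img, sky_mask,
                                  np.ones(sky_mask.shape, dtype=bool))
```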
The image processing device according to each aspect of the present disclosure may be realized by a computer. In this case, an image processing program that realizes the image processing device on a computer by causing the computer to operate as each unit (software element) of the image processing device, and a computer-readable recording medium on which the program is recorded, also fall within the scope of the present disclosure.
[Additional Notes]
The present disclosure is not limited to the embodiments described above, and various modifications are possible within the scope of the claims. Embodiments obtained by appropriately combining technical means disclosed in different embodiments are also included in the technical scope of the present disclosure. Furthermore, new technical features can be formed by combining the technical means disclosed in the respective embodiments.
[Cross-reference to related applications]
This application claims the benefit of priority from Japanese Patent Application No. 2016-236910 filed on December 6, 2016, the entire contents of which are incorporated herein by reference.
DESCRIPTION OF SYMBOLS
10, 10a, 10b, 10c Control unit (image processing device)
11 Relationship specifying unit
12, 12a Sky region specifying unit
13, 13a Execution feasibility determination unit
14, 14b, 14c Image generation unit
21 Pixel counting unit
31 Attention area specifying unit
50 Display unit (display surface)
60 Gazing point detection sensor (sensor)
90 Viewer
100, 100a, 200, 300, 400, 500 Display device
A1 to A3, B1 to B6, C1 to C9 Divided regions
A1, B1 to B4 Divided regions (divided regions located on the upper side)
A3, B5, B6 Divided regions (divided regions located on the lower side)
A1, A2 Divided regions (upper divided regions)
A2, A3 Divided regions (lower divided regions)
Bs, Bs1, Bs2, Bsn Sky region
F Gazing point
IMG1 Image (display target image)
IMG2 Processed image
TA, TC Attention region
TC2 New attention region (attention region)
h, ha, hb, hc Width (vertical length)
Claims (12)
- An image processing device comprising: a sky region specifying unit that specifies a sky region corresponding to the sky in a display target image; an execution feasibility determination unit that determines, based on the color of the sky region specified by the sky region specifying unit, whether to perform a blurring process on the sky region; and an image generation unit that generates a processed image by performing the blurring process on the sky region when the execution feasibility determination unit determines that the blurring process is to be performed.
- The image processing device according to claim 1, wherein the sky region specifying unit specifies a distant view region as the sky region when the display target image includes the distant view region.
- The image processing device according to claim 2, further comprising a relationship specifying unit that specifies, for each of a plurality of divided regions formed by dividing the display target image in its vertical direction, a relationship between each lightness of the plurality of pixels constituting the display target image and the number of pixels having that lightness, wherein the sky region specifying unit specifies, as the distant view region, a region formed by the pixels corresponding to a peak of a graph expressing the relationship specified by the relationship specifying unit for each of the plurality of divided regions, when there is a set of divided regions in which the number of pixels indicated by the peak decreases from an upper divided region toward a lower divided region of the display target image, or when the number of pixels is constant in the display target image.
- The image processing device according to claim 3, wherein the sky region specifying unit specifies the pixels constituting the distant view region, and the execution feasibility determination unit determines that the blurring process is to be performed on the sky region when the colors of the pixels specified by the sky region specifying unit include a specific color and the number of pixels having the specific color in the distant view region is equal to or greater than a predetermined number.
- The image processing device according to claim 2, further comprising a pixel counting unit that, for each of a plurality of divided regions formed by dividing the display target image in its vertical direction, classifies each of the colors of the plurality of pixels constituting the display target image into one of a plurality of predetermined colors and counts the number of pixels for each predetermined color, wherein the sky region specifying unit specifies, as the distant view region, a region formed by the pixels whose number decreases or is constant, when, for any one of the predetermined colors, there is a set of divided regions in which the number of pixels counted by the pixel counting unit decreases from an upper divided region toward a lower divided region of the display target image, or when the number of pixels is constant in the display target image.
- The image processing device according to claim 5, wherein the execution feasibility determination unit determines that the blurring process is to be performed on the sky region when the predetermined color corresponding to the pixels constituting the specified distant view region is a specific color.
- The image processing device according to claim 3 or 6, wherein, among the plurality of divided regions, when a contact area between the sky region included in an upper divided region and the sky region included in a lower divided region adjacent to the upper divided region is less than a predetermined amount, the execution feasibility determination unit targets only the sky region included in the upper divided region for the blurring process.
- The image processing device according to any one of claims 3 to 7, wherein, among the plurality of divided regions, the divided regions located on the upper side are shorter in the vertical direction than the divided regions located on the lower side.
- The image processing device according to any one of claims 1 to 8, which is communicably connectable to a display device having a display surface that displays the display target image and to a sensor that detects a gazing point of a viewer on the display surface, the image processing device further comprising an attention area specifying unit that specifies, based on the gazing point detected by the sensor, an attention region that the viewer is paying attention to among a plurality of divided regions formed by dividing the display target image in its vertical direction, wherein the image generation unit performs an enhancement process on at least a part of a non-processing region of the attention region other than the sky region subjected to the blurring process.
- A display device comprising the image processing device according to any one of claims 1 to 9.
- A method of controlling an image processing device, comprising: a sky region specifying step of specifying a sky region corresponding to the sky in a display target image; an execution feasibility determination step of determining, based on the color of the sky region specified in the sky region specifying step, whether to perform a blurring process on the sky region; and an image generation step of generating a processed image by performing the blurring process on the sky region when it is determined in the execution feasibility determination step that the blurring process is to be performed.
- A control program for causing a computer to function as the image processing device according to claim 1, the control program causing the computer to function as the sky region specifying unit, the execution feasibility determination unit, and the image generation unit.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016-236910 | 2016-12-06 | ||
JP2016236910 | 2016-12-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018105228A1 true WO2018105228A1 (en) | 2018-06-14 |
Family
ID=62491868
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/036609 WO2018105228A1 (en) | 2016-12-06 | 2017-10-10 | Image processing device, display device, image processing device control method, and control program |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018105228A1 (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006140594A (en) * | 2004-11-10 | 2006-06-01 | Pentax Corp | Digital camera |
JP2012109725A (en) * | 2010-11-16 | 2012-06-07 | Canon Inc | Stereoscopic video processing device and stereoscopic video processing method |
JP2013254358A (en) * | 2012-06-07 | 2013-12-19 | Sony Corp | Image processing apparatus, image processing method, and program |
WO2014041860A1 (en) * | 2012-09-14 | 2014-03-20 | ソニー株式会社 | Image-processing device, image-processing method, and program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11501479B2 (en) | Virtual make-up apparatus and virtual make-up method | |
Kruijff et al. | Perceptual issues in augmented reality revisited | |
US8572501B2 (en) | Rendering graphical objects based on context | |
US20170011492A1 (en) | Gaze and saccade based graphical manipulation | |
CN110488969A (en) | Technology relative to actual physics object positioning virtual objects | |
KR20160021607A (en) | Method and device to display background image | |
US10636125B2 (en) | Image processing apparatus and method | |
US11545108B2 (en) | Modifying rendered image data based on ambient light from a physical environment | |
US20220094896A1 (en) | Selective colorization of thermal imaging | |
US11128909B2 (en) | Image processing method and device therefor | |
JP2016142988A (en) | Display device and display control program | |
US11657478B1 (en) | Systems and methods for dynamically rendering three-dimensional images with varying detail to emulate human vision | |
KR20240112853A (en) | Anchoring virtual content to physical surfaces | |
US20140301638A1 (en) | Color extraction-based image processing method, computer-readable storage medium storing the same, and digital image apparatus | |
WO2018105228A1 (en) | Image processing device, display device, image processing device control method, and control program | |
US20230217000A1 (en) | Multiview display system and method with adaptive background | |
WO2017221509A1 (en) | Image processing device, display device, image processing device control method, and control program | |
CN116416415A (en) | Augmented reality processing method and device | |
TWI541761B (en) | Image processing method and electronic device thereof | |
JP7172066B2 (en) | Information processing device and program | |
JP7125847B2 (en) | 3D model display device, 3D model display method and 3D model display program | |
KR20190043956A (en) | Method and apparatus for displaying augmented reality | |
KR20220157147A (en) | Method and apparatus for processing an image | |
US20220201239A1 (en) | Contrast enhanced images composited with artificial colorization | |
KR102214439B1 (en) | Image processing Method and apparatus for low-power mode |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17877399; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17877399; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: JP |