WO2019107507A1 - Line-of-sight guiding system - Google Patents
Line-of-sight guiding system Download PDFInfo
- Publication number
- WO2019107507A1 (PCT/JP2018/044043, JP2018044043W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gaze
- line
- sight
- area
- guidance
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
Definitions
- the present invention relates to a gaze guidance system.
- an object of the present invention is to provide a gaze guidance system capable of natural gaze guidance without causing the user discomfort or stress.
- a gaze guidance system divides a visual target, at which the user directs the gaze, into a gaze guidance region and a gaze non-guidance region, and guides the user's gaze from the gaze non-guidance region to the gaze guidance region.
- the filter processing unit performs bleed processing with color shift at least on the gaze non-guidance region.
- natural gaze guidance can be performed without giving a sense of discomfort or stress to the user.
- this is a graph summarizing the results of a questionnaire conducted on subjects of the CMY bleed experiment, collecting the negative responses.
- this is a graph summarizing the results of a questionnaire conducted on subjects of the B&W bleed experiment, collecting the positive responses.
- this is a graph summarizing the results of a questionnaire conducted on subjects of the B&W bleed experiment, collecting the negative responses.
- this is a graph summarizing the results of the questionnaire comparing the original image, CMY bleed and B&W bleed by the bleed filter, and blur by the Gaussian blur filter.
- this is a flowchart explaining the order of the experiment in the gaze guidance system, and an original figure, before processing, of text consisting of characters with strong edges.
- the gaze guidance system S1 shown in FIG. 1 mainly includes a control unit 1 configured by a PC 7 or the like, and a horizontally long monitor 2 serving as an output unit (monitor device).
- the control unit 1 is provided with a filter processing unit 9, an area division unit 109, and an image generation unit 209. Further, the control unit 1 is connected to a gaze position detection unit 5 provided at the lower edge of the horizontally long monitor 2.
- the gaze position detection unit 5 includes a plurality of cameras 25 that detect the position of the gaze 4a of the user 3.
- the camera 25 is connected to the control unit 1 so as to be able to output information of the detected line of sight.
- the gaze guidance system S1 of the embodiment uses a bleed filter based on an artificial color shift as a method of naturally emphasizing a specific area of a target with much edge information, such as characters, without impairing the readability of the original information.
- FIG. 2 is a schematic diagram showing an example of reading character information using the gaze guidance system S1 of the embodiment.
- the horizontal monitor 2 constitutes a multi-window system having a pair of left and right screens 2a and 2b.
- the multi-window system may be configured by being connected to a pair of left and right monitors (not shown).
- the area division unit 109 (see FIG. 1) of the control unit 1 divides the visual target 10, at which the user 3 directs the gaze 4a, into the gaze guidance region 11 and the gaze non-guidance region 12.
- the gaze non-guidance area 12 surrounds the gaze guidance area 11 to which the user 3 is to be guided. That is, the gaze guidance area 11 is provided locally within the gaze non-guidance area 12, which covers substantially the entire surface.
- the filter processing unit 9 performs bleed processing with color shift on the gaze non-guidance area 12.
- the image generation unit 209 generates the filtered image information so that it can be output to the screens 2a and 2b of the horizontally long monitor 2.
- the image output from the control unit 1 is displayed on the screens 2a and 2b.
- the characters in the gaze guidance area 11 are displayed with relatively high contrast and are emphasized by their sharp, clear black-and-white edges. Therefore, the gaze 4a of the user 3 is guided from the gaze non-guidance area 12 to the gaze guidance area 11.
- an English sentence (hereinafter also referred to as the text) is displayed on the left screen 2a, and the corresponding Japanese translation (hereinafter also referred to as the translation) is displayed on the right screen 2b.
- the user 3 reads a sentence while comparing the left and right screens 2a and 2b.
- the user 3 who is looking at the left screen 2a moves the line of sight 4a to the right screen 2b.
- the part (L in the figure) which was viewed last on the screen 2a on the left side is stored.
- when the gaze 4b of the user 3 returns to the left screen 2a, the gaze non-guidance area 14 other than the previously stored portion L is filtered.
- the filtering of the line of sight non-induction region 14 may be gradually weakened.
- the location of the translation, within the visual target 10 on the right screen 2b, that corresponds to the sentence displayed at the place L on the left screen 2a may be set as an elliptical gaze guidance area 11.
- the surroundings of the gaze guidance area 11 within the visual target 10 may be set as the gaze non-guidance area 12, and bleed processing may be applied to the gaze non-guidance area 12 on the screen 2b.
- by limiting the bleed processing to a level at which the translated text remains legible, it is possible to prevent the user 3 from experiencing stress caused by a sense of incongruity or discomfort.
- with the bleeding kept mild in the gaze non-guidance area 12, the gaze guidance area 11 is relatively emphasized, and the gaze 4a is naturally guided from the gaze non-guidance area 12, where the characters are bleeding, to the gaze guidance area 11.
- when the user's gaze has been successfully guided, the bleed processing of the gaze non-guidance area 12 is gradually weakened. Thereby, the stress on the user 3 can be reduced, as in the case of the gaze non-guidance area 14.
- the user's gaze 4b is naturally and repeatedly guided to the gaze guidance area 11 or the gaze guidance area 13. By reading ahead of the current gaze position in the text and filtering the translation, the gaze can be directed efficiently to the relevant part. Even when the gaze is returned to the text, it is naturally guided to the place that was being read. Efficient gaze movement therefore becomes possible, and natural gaze guidance can be performed without causing the user discomfort or stress.
- [Bleed filter] FIG. 3A shows an original image 20 of a character string, FIG. 3B shows a blur image 21 produced by a Gaussian blur filter, FIG. 3C shows a CMY bleed filter image, and FIG. 3D shows an example of a B&W bleed filter image, which bleeds in black only.
- "bleeding" refers to single-color "plate misregistration" or multi-color "color misregistration"; it is obtained by intentionally generating display data that reproduces the "color misregistration" phenomenon occurring when a projector projects an image, and reproducing it on the output monitor.
- the CMY bleed filter uses the three colors Cyan (hereinafter also referred to as the C component), Magenta (hereinafter also referred to as the M component), and Yellow (hereinafter also referred to as the Y component), and the image is obtained by shifting each of them by x pixels (x: natural number).
- CMY bleeding is generated through the following four steps.
- Step S1: Images are generated by shifting the characters in the original image 20 shown in FIG. 4A in three different directions by the specified number of pixels, and each is colored with the C component, the M component, or the Y component. Thus, as shown in FIG. 4B, one image is generated for each of the left, right, and downward shifts of the original image by the specified number of pixels.
- Step S2: An alpha channel is added to the original image 20 so that the shifted images can be composited using alpha blending.
- alpha blending means compositing two images according to a coefficient (α value).
- an alpha channel is obtained by preparing an image, called a mask image, that defines the portions to be made transparent during compositing, and making the target image transparent based on it. The transparency information may also be attached to each pixel, and that information is called the alpha channel.
- Step S3: The shifted images are colored Cyan, Magenta, and Yellow, respectively, and then integrated.
- the RGB values of an image f are written f(R), f(G), and f(B), each taking an integer value from 0 to 255.
- the integration base image (a white image) is denoted Base, each of the three shifted images cmy, the original image original, and the resulting image result.
- each value of result is obtained by subtracting each cmy (and original) from Base according to the per-pixel color and the alpha-blend transparency α.
- the RGB values of cmy are adjusted. For example, when coloring cmy Cyan, the values of cmy(G) and cmy(B) are both set to 255. As a result, only the cmy(R) value is subtracted from Base(R), so the cyan color is reproduced. The same adjustment is made for Magenta and Yellow.
- in the case of B&W bleeding, Equation 1 is applied without adjusting the RGB values of cmy.
- Step S4: The original character-string position is reproduced using the original image.
- the strength of the bleed filter can be controlled by the opacity β of Equation 3 (β = 1 − α).
- the device of the comparative experiment is equivalent to the gaze guidance system S1 of FIG. 1 plus a jaw rest 27 shown by a two-dot chain line.
- the output monitor 24 is a 25-inch monitor with a resolution of 1920 × 1080 pixels.
- the lower edge of the output monitor 24 is provided with a plurality of cameras 25 of a gaze position detection unit 5 that detects the gaze of the subject, and is connected to the control unit 1 in the PC 7 below the desk 26.
- a jaw rest 27 is provided which keeps the distance between the output monitor 24 and the head of the subject seated on the chair 28 constant.
- a gaze measuring device is provided at the lower part of the output monitor 24.
- the room in which the apparatus was placed was lit, and no one else was within the subject's field of view.
- the character size was 18 pt, and the font was Times New Roman Regular.
- FIG. 5 is a diagram showing the flow of comparative experiments.
- the subject 3 reads aloud the sentence displayed on the output monitor 24.
- the reading start point is clearly indicated in the text by a red circle, for the first trial only. The subject starts reading aloud from that point and presses the space key on reaching a period.
- when the space key is pressed, the screen of the output monitor 24 is darkened for 3 seconds; during this time a white cross 29 is displayed at one of the four corners of the screen, and the subject looks at the white cross 29.
- the darkening is canceled and the sentence is displayed again.
- of the 10 trials in total, 3 used the original text, 1 used the Gaussian blur filter, and 3 trials each used the two bleed filters (CMY and B&W) with different bleed sizes, varying the filter type and strength across trials.
- the filter effect was used to emphasize the reading-resumption point when the subject, after pressing the space key and the 3-second blackout, returned the gaze to that point.
- the subject's line of sight was measured, and when the subject's line of sight returned to the point where the reading was resumed, the highlighting was gradually released.
- for this transition, images processed in advance into 20 levels of strength for each filter were displayed for 0.030 to 0.035 seconds each, giving a total transition of 0.600 to 0.700 seconds.
- the subjects were informed in advance that in some trials the unfiltered original text would be displayed.
- Table 1 is an example of the questionnaire results obtained from the subjects of the experiment testing the effect of the gaze guidance system.
- four questions were prepared, and each was answered on a five-point scale from 1 to 5.
- questions 1 and 3 ask about positive impressions, while questions 2 and 4 ask about negative impressions.
- a gaze guidance area is generated with a radius of 118 pixels or less, preferably 30 pixels.
- FIG. 6 shows the questionnaire results for each bleed size of the CMY bleed filter.
- with the CMY bleed filter, reading was easiest to resume, and reading aloud easiest to continue, at a bleed size of 1.5 pixels.
- at that size, however, the processing of the image is also perceived strongly.
- the CMY bleed with a bleed size of 0.5 pixel is therefore treated as representative of CMY bleeding as a whole; hereinafter, CMY bleeding is assumed to be 0.5 pixel in size unless otherwise specified.
- FIGS. 7A to 7D show the questionnaire results for each bleed size of the B&W bleed filter. Comparing the B&W bleed sizes (0.5, 1.0, and 1.5 pixels), reading was easiest to resume and to continue at 1.5 pixels and 0.5 pixel. However, of the three, the 1.5-pixel case was the hardest for reading the document. The processing was perceived most strongly at 1.0 pixel, followed by 1.5 pixels and then 0.5 pixel. From these results, the B&W bleed with a size of 0.5 pixel is treated as representative of B&W bleeding as a whole; hereinafter, B&W bleeding is assumed to be 0.5 pixel in size unless otherwise specified.
- FIGS. 8A and 8B show the results of the questionnaire comparing, among the effects of the gaze guidance system S1, the original image, CMY bleed and B&W bleed by the bleed filter, and blur by the Gaussian blur filter.
- B&W bleed gave the best results for the readability of the document and the ease of finding the resumption point.
- however, B&W bleed is also the filter whose image processing is felt most strongly, showing a trade-off between how natural the result feels on the one hand and the readability of the text and the ease of finding the resumption point on the other.
- for CMY bleed, the readability and the ease of finding the resumption point are lower than for the other filters, but the scores are better than for the trial with the unprocessed original image. In addition, CMY bleed was the filter whose image processing was felt least strongly, and it did not get in the way when reading the text. Therefore, B&W bleed is suitable when readability is prioritized in situations where the user may notice the processing, and CMY bleed is suitable when the aim is natural gaze guidance.
- Gaussian blur tends to behave similarly to B&W bleed. Its processing is not felt as strongly as that of B&W bleed, but it gives a stronger impression of processing than CMY bleed. The degree to which it got in the way of reading was the same as for B&W bleed.
- FIG. 9 is a flowchart for explaining the order of experiments in the gaze guidance system S1 of the embodiment.
- in step S1, the filter processing unit 9 of the gaze guidance system S1 applies filtering to everything other than the ROI (Region of Interest).
- in step S2, the gaze position detection unit 5 detects the gaze of the user 3.
- in step S3, the control unit 1 determines whether the gaze of the user 3 has entered the ROI.
- if it is determined that the gaze of the user 3 has entered the ROI (YES in step S3), the process proceeds to step S4; if it is determined that the gaze has not entered the ROI (NO in step S3), the process returns to step S2.
- in step S4, the filter processing unit 9 gradually cancels the filtering (filter processing).
- in step S5, the gaze position detection unit 5 detects the gaze of the user 3.
- in step S6, the gaze position detection unit 5 determines whether the gaze of the user 3 has moved to another process (window).
- if it is determined that the gaze of the user 3 has moved to another process (YES in step S6), the process proceeds to step S7; if it is determined that it has not (NO in step S6), the process returns to step S5.
- in step S7, the area around the last fixation point is set as the ROI of the current process, and processing moves to the other process. For each window, the ROI at the time the gaze last left that window is held; that is, when the gaze leaves, the last fixation point is kept as the gaze guidance ROI of that window. Thus, when the gaze returns to the window, bleeding can be started everywhere except the previously recorded ROI. Therefore, when working back and forth between the screens 2a and 2b or between work areas on multiple screens, as in a multi-window system, natural gaze guidance can be performed continuously.
- in step S8, it is determined whether the process has ended. If the process has not ended (NO in step S8), the process returns to step S1 and is repeated. When the process ends (YES in step S8), the control unit 1 ends the series of processes (END). A sketch of this loop is given below.
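The loop of steps S1 to S8 can be summarized in code. The following is a minimal sketch in Python, not the patent's implementation: get_gaze(), get_window(), task_finished(), and apply_filter() are assumed callbacks standing in for the gaze position detection unit 5, the window system, and the filter processing unit 9, and the ROI radius and release step are illustrative values.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

Point = Tuple[int, int]

@dataclass
class WindowState:
    roi_center: Point             # last fixation point kept as this window's ROI (step S7)
    roi_radius: int = 30          # illustrative radius (the text mentions 118 px or less, e.g. 30 px)
    filter_strength: float = 1.0  # 1.0 = full bleed outside the ROI, 0.0 = no filtering

def inside_roi(gaze: Point, state: WindowState) -> bool:
    dx, dy = gaze[0] - state.roi_center[0], gaze[1] - state.roi_center[1]
    return dx * dx + dy * dy <= state.roi_radius ** 2

def gaze_guidance_loop(windows: Dict[str, WindowState],
                       get_gaze, get_window, task_finished, apply_filter) -> None:
    current = get_window()
    while not task_finished():                            # step S8: repeat until the task ends
        state = windows[current]
        apply_filter(current, state)                      # step S1: bleed everything except the ROI
        gaze = get_gaze()                                 # step S2: detect the gaze
        if not inside_roi(gaze, state):                   # step S3: has the gaze entered the ROI?
            continue                                      # NO: keep detecting the gaze
        while state.filter_strength > 0.0:                # step S4: gradually cancel the filtering
            state.filter_strength = max(0.0, state.filter_strength - 0.05)
            apply_filter(current, state)
        last_gaze = gaze
        while not task_finished():
            gaze, window = get_gaze(), get_window()       # step S5: keep detecting the gaze
            if window != current:                         # step S6: moved to another window?
                state.roi_center = last_gaze              # step S7: keep the last fixation as ROI
                state.filter_strength = 1.0
                current = window
                break
            last_gaze = gaze
```

Each window keeps its own WindowState, so the previously recorded ROI is restored when the gaze returns to that window, as described for step S7.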
- FIGS. 10A to 10C illustrate images of a document used with the gaze guidance system S1 of the embodiment.
- FIG. 10A is an original original drawing 101 before processing.
- it is a text composed of characters with strong edges and high edge contrast, in contrast to a pictorial image (see FIG. 11) whose contrast is lowered by neutral colors and gradations.
- FIG. 10B is a diagram in which the bleeding process is performed by the CMY blur filter according to the embodiment.
- the black characters are subjected to bleed processing with color shift.
- FIG. 10C is a diagram in which blurring is performed by the Gaussian blur filter of the comparative example. In both FIGS. 10B and 10C, a gaze guidance effect toward the gaze guidance area 211 appears to be obtained. However, in the comparative example of FIG. 10C, if a sufficient gaze guidance effect is sought, the filter makes the characters in the gaze non-guidance area 212 hard to read. In contrast, in FIG. 10B of the embodiment, because bleed processing by the CMY bleed filter is used, no sense of incongruity arises, and the characters within the gaze non-guidance area 112 can continue to be read without the reader being conscious of the filter.
- FIGS. 11A to 11D illustrate images of pictures (photographs) used in the gaze guidance system S1 of the embodiment.
- FIG. 11A is an original original drawing 201 before processing.
- the edge is weak due to a neutral color or gradation.
- FIG. 11B is a diagram in which the bleeding process is performed by the CMY blur filter according to the embodiment.
- the peripheral line of sight non-induction region 312 is subjected to bleeding processing with color shift.
- FIG. 11C is a diagram in which blurring is performed by the Gaussian blur filter of the comparative example.
- the gaze guidance area 311 and the gaze non-guiding area 312 can be easily distinguished, and the gaze guidance effect can be obtained.
- the blurring process using the Gaussian blur filter is likely to obtain the line-of-sight guidance effect.
- FIG. 12 is a schematic view showing an example of application of the gaze guidance system S1 of the present invention to an electronic advertising medium.
- the screen of the electronic advertisement medium 500 is divided into a gaze guidance area 511 and a gaze non-guiding area 512.
- FIG. 13 shows an example of application of the visual line guidance system S1 of the present invention, and is a schematic view showing application to digital signage.
- the screen of the digital signage 600 is divided into a gaze guidance area 611 and a gaze non-guidance area 612.
- FIG. 14 is a schematic view showing an example of application of the visual line guidance system S1 of the present invention, which is applied to information visualization.
- the screen of the 3D display 700 on the three-dimensionalized screen is divided into a gaze guidance area 711 and a gaze non-guidance area 712.
- FIG. 15 is a schematic view showing an example of application of the visual line guidance system S1 of the present invention, which is applied to an electronic book.
- the screen of the electronic book 800 is divided into a gaze guidance area 811 and a gaze non-guidance area 812.
- FIG. 16 and FIG. 17 show an example of applying the gaze guidance system S2 of the modification, and are a schematic diagram and a block diagram showing application to an actual show window 900.
- in this gaze guidance system S2, the screens 2a and 2b formed by the horizontally long monitor 2 of the embodiment are replaced: an external projector 8 is provided as the output unit 102, a projector device that projects the data generated by the PC 7.
- the output unit 102 may be installed anywhere inside or outside the actual show window 900, and its projection may be reflected or transmitted by a mirror, glass, or the like.
- when projecting an image onto the articles 901 and 902 displayed in the actual show window 900, the external projector 8 projects the gaze guidance area 911 and the gaze non-guidance area 912 so that they are distinguished.
- the external projector 8 is used to add processing such as bleeding to the actual products; that is, a finely filtered image, such as a bleed image, is projected onto them.
- the appearance of the real articles 901 and 902 changes according to the projected image.
- the unprojected article 903 in the gaze guidance area 911 then appears to stand out, so that a person's gaze is naturally guided onto the article 903.
- the modifications shown in FIGS. 12 to 17 are the same as or equivalent to the embodiment in configuration and effect except as specifically described, and their description is therefore omitted.
- in the embodiment, the use of a CMY bleed filter based on an artificial color shift has been shown and described, but the present invention is not particularly limited thereto.
- an RGB bleed filter or a monochrome B&W bleed filter may be used, and the number of colors, the color scheme, and their combination may be chosen freely as long as bleed processing with color shift is performed.
- the directions and amounts by which the respective colors are shifted are not particularly limited. For example, the shifts need not be radial at equal angles; other directions and unequal angles may be used, and the number x of shifted pixels may differ for each color.
- a Gaussian-blur image and a CMY bleed filter image may be mixed in one screen. Further, the transparency may be changed for each color element and the bleed displayed accordingly, depending on the application.
- although the control unit 1 is described in the embodiment as divided into the filter processing unit 9, the area division unit 109, and the image generation unit 209, the present invention is not limited to this, and the processing may be integrated in the control unit 1. Furthermore, the control unit 1 is not limited to a single unit; a plurality of processing units, for example four or eight, may be combined, and the processing capability, number, and type of the control unit 1 and its internal processing units are not particularly limited.
- the CMY bleed filter, a monochrome B&W bleed filter, or a Gaussian blur filter may each be applied to part or all of the character portion and the picture portion, respectively. That is, a plurality of filter processing units 9 performing different types of filter processing may be provided, and different filter processing may be applied to each area divided by the area division unit 109; the type and number of filters are not particularly limited. A sketch of such per-area filtering is shown below.
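As an illustration of this variation only (a sketch under assumed names, not the patent's implementation), the area division could be represented as a mapping from divided areas to filter functions, so that, for example, a CMY bleed filter is applied to the character portion and a Gaussian blur filter to the picture portion:

```python
from typing import Callable, Dict, Tuple
import numpy as np

Area = Tuple[int, int, int, int]                 # (top, bottom, left, right) of a divided area
FilterFn = Callable[[np.ndarray], np.ndarray]    # e.g. a CMY bleed, B&W bleed, or Gaussian blur

def apply_area_filters(image: np.ndarray, area_filters: Dict[Area, FilterFn]) -> np.ndarray:
    """Apply a possibly different filter to each divided area; unlisted areas stay unfiltered."""
    out = image.copy()
    for (top, bottom, left, right), filter_fn in area_filters.items():
        patch = image[top:bottom, left:right]
        out[top:bottom, left:right] = filter_fn(patch)   # each filter must preserve patch shape
    return out

# Hypothetical usage: bleed the text region, blur the picture region of one screen.
# area_filters = {(0, 540, 0, 1920): cmy_bleed_fn, (540, 1080, 0, 1920): gaussian_blur_fn}
# filtered_frame = apply_area_filters(frame, area_filters)
```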
- in the modification, the filter image is projected onto the articles 901 and 902 displayed in the actual show window 900 using the external projector 8, but the present invention is not limited to this; the projection target may be any real object, and there are no particular limitations on the shape, number, or material of what is projected.
- Reference signs: 1 control unit; 2 horizontally long monitor (output unit); 2a, 2b screens; 3 user; 4a to 4c line of sight; 5 gaze position detection unit; 8 external projector (output unit); 9 filter processing unit; 10 visual target; 11, 111 to 911 gaze guidance area; 12, 112 to 912 gaze non-guidance area; 900 actual show window; 901 to 903 articles; S1, S2 gaze guidance system
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Controls And Circuits For Display Device (AREA)
- Image Processing (AREA)
Abstract
A control unit (1) in a line-of-sight guiding system (S1) includes a filter processing unit (9) that divides a visual target (10) to which the line of sight (4a), (4b) of a user (3) is directed into a line-of-sight guidance region (11) and a line-of-sight non-guidance region (12), and guides the line-of-sight (4a) of the user (3) from the line-of-sight non-guidance region (12) toward the line-of-sight guidance region (11). A landscape monitor (2) displays an image obtained on the basis of generated data that was subjected to filter processing by the filter processing unit (9). The filter processing unit (9) performs blur processing associated with color shifting on at least the line-of-sight non-guidance region (12).
Description
The present invention relates to a gaze guidance system.
Conventionally, in the field of image processing, image processing systems using blurring such as filter processing have been studied (see, for example, Patent Documents 1 to 3).
In recent years, the Web and digital signage have become widespread, and many techniques have been devised to make the content displayed on these media leave an effective impression. One such technique is to guide the user's gaze to an intended part of an image.
However, explicit gaze guidance such as screen blinking or partial animation gives the user an unpleasant impression. Conversely, if emphasis is placed on natural gaze guidance so as not to cause discomfort, the guidance effect becomes weak.
For example, in conventional research, gaze guidance has been performed using the blurring effect on photographs (pictures).
When this blur-based gaze guidance method is applied to displays with much sharply outlined edge information, such as character information in text, the user notices that the surroundings are blurred. Moreover, if the degree of blurring is increased, the surroundings become so blurred that the readability of the characters is impaired. It has therefore been difficult to achieve both gaze guidance and readability.
An object of the present invention is therefore to provide a gaze guidance system capable of natural gaze guidance without causing the user discomfort or stress.
A gaze guidance system according to the present invention includes a filter processing unit that divides a visual target, at which the user directs the gaze, into a gaze guidance region and a gaze non-guidance region, and that guides the user's gaze from the gaze non-guidance region to the gaze guidance region; the filter processing unit is characterized in that it performs bleed processing with color shift at least on the gaze non-guidance region.
According to the present invention, natural gaze guidance can be performed without causing the user discomfort or stress.
Embodiments of the present invention will now be described in detail with reference to the drawings. In the description, the same elements are denoted by the same reference numerals, and redundant description is omitted.
The gaze guidance system S1 shown in FIG. 1 mainly includes a control unit 1 configured by a PC 7 or the like, and a horizontally long monitor 2 serving as an output unit (monitor device).
The control unit 1 includes a filter processing unit 9, an area division unit 109, and an image generation unit 209. The control unit 1 is also connected to a gaze position detection unit 5 provided at the lower edge of the horizontally long monitor 2.
The gaze position detection unit 5 includes a plurality of cameras 25 that detect the position of the gaze 4a of the user 3. The cameras 25 are connected to the control unit 1 so that the detected gaze information can be output to it.
The gaze guidance system S1 of the embodiment uses a bleed filter based on an artificial color shift as a method of naturally emphasizing a specific area of a target with much edge information, such as characters, without impairing the readability of the original information.
FIG. 2 is a schematic diagram showing an example of reading character information using the gaze guidance system S1 of the embodiment. The horizontally long monitor 2 constitutes a multi-window system having a pair of left and right screens 2a and 2b. Alternatively, the multi-window system may be configured by connecting a pair of left and right monitors (not shown).
That is, the area division unit 109 (see FIG. 1) of the control unit 1 divides the visual target 10, at which the user 3 directs the gaze 4a, into a gaze guidance area 11 and a gaze non-guidance area 12.
Here, the gaze non-guidance area 12 surrounds the gaze guidance area 11 to which the user 3 is to be guided. In other words, the gaze guidance area 11 is provided locally within the gaze non-guidance area 12, which covers substantially the entire surface.
The filter processing unit 9 performs bleed processing with color shift on the gaze non-guidance area 12.
The image generation unit 209 generates the filtered image information so that it can be output to the screens 2a and 2b of the horizontally long monitor 2.
The image output from the control unit 1 is displayed on the screens 2a and 2b. In the displayed image, the characters in the gaze guidance area 11 appear with relatively high contrast and are emphasized by their sharp, clear black-and-white edges.
As a result, the gaze 4a of the user 3 is guided from the gaze non-guidance area 12 to the gaze guidance area 11.
For example, as shown in FIG. 2A, an English sentence (hereinafter also referred to as the text) is displayed on the left screen 2a, and the corresponding Japanese translation (hereinafter also referred to as the translation) is displayed on the right screen 2b. The user 3 reads the text while comparing the left and right screens 2a and 2b.
The user 3, who has been looking at the left screen 2a, moves the gaze 4a to the right screen 2b. At this time, the part last viewed on the left screen 2a (L in the figure) is stored.
In FIG. 2B, when the gaze 4b of the user 3 returns to the left screen 2a, the gaze non-guidance area 14 other than the previously stored portion L is filtered.
In FIG. 2C, once it is confirmed that the gaze has returned to L, the filtering of the gaze non-guidance area 14 may be gradually weakened.
The location of the translation, within the visual target 10 on the right screen 2b, that corresponds to the sentence displayed at the place L on the left screen 2a may be set as an elliptical gaze guidance area 11. At the same time, the surroundings of the gaze guidance area 11 within the visual target 10 may be set as the gaze non-guidance area 12, and bleed processing may be applied to the gaze non-guidance area 12 on the screen 2b.
In this case, by limiting the bleed processing to a level at which the translation remains legible, the user 3 can be prevented from experiencing stress caused by a sense of incongruity or discomfort.
With the bleeding kept mild in the gaze non-guidance area 12, the gaze guidance area 11 is relatively emphasized, and the gaze 4a is naturally guided from the gaze non-guidance area 12, where the characters are bleeding, to the gaze guidance area 11.
When the user's gaze has been successfully guided, the bleed processing of the gaze non-guidance area 12 is gradually weakened. As in the case of the gaze non-guidance area 14, this reduces the stress on the user 3.
In this way, the user's gaze 4b is naturally and repeatedly guided to the gaze guidance area 11 or the gaze guidance area 13.
By reading ahead of the current gaze position in the text and filtering the translation, the gaze can be directed efficiently to the relevant part. Even when the gaze is returned to the text, it is naturally guided to the place that was being read. Efficient gaze movement therefore becomes possible, and natural gaze guidance can be performed without causing the user discomfort or stress.
When a visual target such as the gaze guidance area 11 is emphasized, the surrounding gaze non-guidance area 12 is relatively processed, using a blur filter or a bleed filter, so as to be blurred or made to bleed.
However, if the degree of filter processing is too strong when emphasizing the visual target, the user 3 perceives it as unnatural.
Therefore, it was decided to investigate, for each filter, the degree of strength at which the user 3 does not feel the result to be unnatural.
[Bleed filter]
FIG. 3A shows an original image 20 of a character string, FIG. 3B shows a blur image 21 produced by a Gaussian blur filter, FIG. 3C shows a CMY bleed filter image, and FIG. 3D shows an example of a B&W bleed filter image, which bleeds in black only.
Here, "bleeding" refers to single-color "plate misregistration" or multi-color "color misregistration"; it is obtained by intentionally generating, in the filter processing unit 9, display data that reproduces the "color misregistration" phenomenon that occurs when a projector projects an image, and reproducing it on the output monitor.
Of these, as shown in FIG. 4, the CMY bleed filter uses the three colors Cyan (hereinafter also referred to as the C component), Magenta (hereinafter also referred to as the M component), and Yellow (hereinafter also referred to as the Y component), and the image is obtained by shifting each of them by x pixels (x: natural number).
Specifically, CMY bleeding is generated through the following four steps.
Step S1: Images are generated by shifting the characters in the original image 20 shown in FIG. 4A in three different directions by the specified number of pixels, and each is colored with the C component, the M component, or the Y component. Thus, as shown in FIG. 4B, one image is generated for each of the left, right, and downward shifts of the original image by the specified number of pixels.
Step S2: An alpha channel is added to the original image 20 so that the shifted images can be composited using alpha blending. Alpha blending means compositing two images according to a coefficient (α value). An alpha channel is obtained by preparing an image, called a mask image, that defines the portions to be made transparent during compositing, and making the target image transparent based on it; the transparency information may also be attached to each pixel, and that information is called the alpha channel.
Step S3: The shifted images are colored Cyan, Magenta, and Yellow, respectively, and then integrated.
The RGB values of an image f are written f(R), f(G), and f(B), each taking an integer value from 0 to 255. The integration base image (a white image) is denoted Base, each of the three shifted images cmy, the original image original, and the resulting image result.
[Equation 1]
result(i) = Base(i) − α × (255 − cmy(i))   (i ∈ {R, G, B})
As shown in Equation 1, each value of result is obtained by subtracting each cmy (and, in Step S4, original) from Base according to the per-pixel color and the alpha-blend transparency α.
In addition, the RGB values of cmy are adjusted so that each cmy is colored according to its shift direction. For example, when coloring cmy Cyan, cmy(G) and cmy(B) are both set to 255. As a result, only the cmy(R) value is subtracted from Base(R), so the cyan color is reproduced. The same adjustment is made for Magenta and Yellow. This adjustment applies to CMY bleeding; in the case of B&W bleeding, Equation 1 is applied without adjusting the RGB values of cmy.
Step S4: The original character-string position is reproduced using the original image. In the case of CMY bleeding, regions where the two colors completely overlap become white. Therefore, the calculation of Equation 2 below is finally performed using original, so that the region where all three colors overlap becomes black.
[Equation 2]
result(i) = Base(i) − (1 − α) × (255 − original(i))   (i ∈ {R, G, B}, original(i) = 0)
By the above operations, the result image in which the three cmy images have been integrated is combined with the original text image to obtain the final bleed-filter image.
The strength of the bleed filter can be controlled by the opacity β in Equation 3 below.
[Equation 3]
β = 1 − α
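The four steps and Equations 1 to 3 can be put together in code. The following is a minimal sketch in Python/NumPy, not the implementation of the patent: it assumes a grayscale text image with black characters (0) on a white background (255), uses left, right, and downward shifts as in FIG. 4B, and applies Equation 2 to the integrated result, which is how the description of combining the integrated cmy image with the original text image is read here.

```python
import numpy as np

def cmy_bleed(original: np.ndarray, x: int = 1, beta: float = 0.5) -> np.ndarray:
    """Illustrative CMY bleed filter following Steps S1-S4 and Equations 1-3.

    original : H x W uint8 grayscale text image (0 = black characters, 255 = white background)
    x        : shift amount in pixels (Step S1)
    beta     : opacity of Equation 3 (beta = 1 - alpha), balancing the colored bleed
               against the restored original characters
    """
    alpha = 1.0 - beta
    h, w = original.shape

    def shift(img, dy, dx):
        # Shift the image by (dy, dx), padding the exposed border with white.
        out = np.full_like(img, 255)
        out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
            img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
        return out

    # Step S1: three shifted copies (left, right, down), to be colored C, M, Y.
    shifted = [shift(original, 0, -x),   # -> Cyan
               shift(original, 0, x),    # -> Magenta
               shift(original, x, 0)]    # -> Yellow

    # Steps S2-S3: composite onto a white Base with alpha blending (Equation 1):
    #   result(i) = Base(i) - alpha * (255 - cmy(i)),  i in {R, G, B}.
    # Coloring a copy Cyan means cmy(G) = cmy(B) = 255, so only R is reduced;
    # likewise Magenta reduces only G and Yellow reduces only B.
    result = np.full((h, w, 3), 255.0)
    for cmy, channel in zip(shifted, (0, 1, 2)):
        result[..., channel] -= alpha * (255.0 - cmy)

    # Step S4: restore the original character positions toward black (Equation 2):
    #   result(i) = Base(i) - (1 - alpha) * (255 - original(i)),  original(i) = 0.
    char_mask = original == 0
    result[char_mask] -= (1.0 - alpha) * 255.0

    return np.clip(result, 0, 255).astype(np.uint8)
```

Varying β then changes how strongly the bleed appears, which is also one way a series of graded strength levels (such as the 20 levels used in the experiment below) could be generated.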
[Comparative experiment]
A comparative experiment was performed between the bleed filter and the blur filter. The experiment verified whether, with each filter, the gaze that had once been moved away could be quickly returned to its original position before the movement. Preferably, a plurality of images differing in filter strength (by changing the parameters) are prepared and the effect of that difference is verified; representative examples are described here.
The apparatus of the comparative experiment is equivalent to the gaze guidance system S1 of FIG. 1 with the addition of a jaw rest 27, shown by a two-dot chain line.
The output monitor 24 is a 25-inch monitor with a resolution of 1920 × 1080 pixels. The cameras 25 of the gaze position detection unit 5, which detect the subject's gaze, are provided at the lower edge of the output monitor 24 and are connected to the control unit 1 in the PC 7 under the desk 26.
In front of the output monitor 24, a jaw rest 27 keeps the distance between the output monitor 24 and the head of the subject seated on the chair 28 constant. A gaze measuring device is provided at the lower part of the output monitor 24. The room in which the apparatus was placed was lit, and no one else was within the subject's field of view.
The character size was 18 pt, and the font was Times New Roman Regular.
The subjects were 32 in total: 28 men and 4 women in their twenties and thirties. None of the subjects had eye abnormalities, and subjects with poor eyesight performed the comparative experiment with their vision corrected by glasses or contact lenses.
[Experimental procedure]
FIG. 5 shows the flow of the comparative experiment.
As shown at the upper left of the figure, the subject 3 reads aloud the text displayed on the output monitor 24. The reading start point is clearly marked in the text with a red circle for the first trial only. The subject starts reading aloud from that point and presses the space key on reaching a period.
As shown at the upper right of the figure, when the space key is pressed, the screen of the output monitor 24 goes dark for 3 seconds; during that time a white cross 29 is displayed at one of the four corners of the screen, and the subject looks at the white cross 29.
As shown at the lower right of the figure, the dark period then ends and the text is displayed again.
As shown at the lower left of the figure, the subject resumes reading aloud from where the text was left off.
This is repeated, and once the specified range has been read aloud, the final press of the space key marks the end of the trial and the subject moves on to the questionnaire.
The ten trials, in which the filter type and strength were varied, consisted of three trials with the original text, one trial with the blur filter, and three trials each with the two bleeding filters (CMY and B&W) at different bleed sizes. The filter effect was used to emphasize the reading resumption point when the subject, after pressing the space key and the 3-second dark period, returned the line of sight to that point.
During the experiment the subject's line of sight was measured, and once it had returned to the reading resumption point the highlighting was gradually released.
For this transition, images processed in advance into 20 levels of strength for each filter were displayed for 0.030 to 0.035 seconds each, giving a total transition of 0.600 to 0.700 seconds. The subjects were also told in advance that some trials would display the original text with no filtering applied.
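The gradual release described above can be sketched in a few lines of Python. This is a minimal illustration only: the 20 steps and the 0.030 to 0.035 second frame time come from the description above, while the linear cross-fade and the `display` callback are assumptions, not part of the specification.

```python
import time
import numpy as np

def release_highlight(display, filtered_img, original_img,
                      steps=20, total_time=0.65):
    """Gradually cancel the filter effect once the gaze has returned.

    Blends `steps` intermediate frames between the filtered image and the
    unprocessed original and shows each one for an equal share of
    `total_time` (about 0.030-0.035 s per frame, 0.600-0.700 s in total).
    `display` is assumed to be any callable that puts an RGB array
    (H x W x 3, uint8) on the output monitor.
    """
    frame_time = total_time / steps
    a = filtered_img.astype(np.float32)
    b = original_img.astype(np.float32)
    for i in range(1, steps + 1):
        t = i / steps  # 0 = fully filtered, 1 = unfiltered original
        frame = ((1.0 - t) * a + t * b).astype(np.uint8)
        display(frame)
        time.sleep(frame_time)
```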
[Preliminary survey]
To determine the maximum strength of each filter, a preliminary survey was carried out on the bleed and blur thresholds at which users feel the image to be unnatural.
An image of meaningless character strings in Times New Roman Regular at a character size of 60 pt was displayed and processed so that the strength of the filter gradually increased with time.
The preliminary survey was conducted on 20 subjects aged 21 to 24: 18 men and 2 women.
From the results, a Gaussian blur controlled by the variable σ was obtained, and the obtained value was used as the maximum strength of each filter.
This removes unnecessarily strong filters and makes the experiment more efficient.
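As a reference for the σ-controlled Gaussian blur mentioned above, the following sketch shows how such a blur could be produced; the maximum σ found in the preliminary survey would simply be passed in as `sigma`. The use of SciPy and the per-channel blurring are illustrative assumptions, not part of the specification.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sigma_blur(image, sigma):
    """Gaussian blur whose strength is controlled by the variable sigma.

    sigma = 0 returns the original image; ramping sigma up to the maximum
    value found in the preliminary survey reproduces the gradually
    strengthening stimulus used there.
    """
    if sigma <= 0:
        return image.copy()
    channels = [gaussian_filter(image[..., c].astype(np.float32), sigma=sigma)
                for c in range(image.shape[-1])]
    return np.clip(np.stack(channels, axis=-1), 0, 255).astype(image.dtype)
```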
In the main experiment, we measured the time from when the text image appeared on the screen until the subject (user) started reading aloud, completed reading the specified range, and pressed the space key, together with the gaze position during that period. A questionnaire was also administered after every trial.
Table 1 is an example of the questionnaire given to the subjects of the experiment testing the effect of the gaze guidance system. Four questions were prepared, each scored on a five-point scale from 1 to 5. Questions 1 and 3 ask whether the impression was favorable, while questions 2 and 4 ask whether the impression was unfavorable.
When the distance between the subject and the output monitor 24 was 60 cm, the gaze guidance area was generated with a radius of 118 pixels or less, preferably a radius of 30 pixels.
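The relation between viewing distance and the radius of the gaze guidance area can be expressed as a visual angle. The sketch below is only an illustration: the 60 cm distance and the 118- and 30-pixel radii come from the text, the screen geometry from the 25-inch, 1920 × 1080 monitor used in the experiment, and the visual angles shown are back-calculated for this sketch rather than stated in the specification.

```python
import math

def radius_in_pixels(view_distance_cm, visual_angle_deg,
                     screen_width_cm=55.3, screen_width_px=1920):
    """Convert a visual angle around the fixation point into a pixel radius.

    Assumes square pixels; 55.3 cm is roughly the horizontal extent of a
    25-inch 16:9 panel.
    """
    px_per_cm = screen_width_px / screen_width_cm
    radius_cm = view_distance_cm * math.tan(math.radians(visual_angle_deg))
    return radius_cm * px_per_cm

# At 60 cm, about 3.2 degrees corresponds to the 118-pixel upper bound
# and about 0.83 degrees to the preferred 30-pixel radius.
print(round(radius_in_pixels(60, 3.2)))   # ~116
print(round(radius_in_pixels(60, 0.83)))  # ~30
```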
[Experimental results]
The experimental results are discussed on the basis of the questionnaire results.
First, for the bleeding filters, the questionnaire results for CMY bleeding and B&W bleeding are compared.
a. CMY bleeding filter
FIG. 6 shows the questionnaire results for each bleed size of the CMY bleeding filter. As shown in FIG. 6, among the bleed sizes compared (0.5 pixel, 1.0 pixel, and 1.5 pixel), the 1.5-pixel CMY bleed made it easiest to resume and continue reading aloud. However, the image processing was also felt most strongly.
With a bleed size of 0.5 pixel, on the other hand, the ease of finding the resumption point and the readability of the text were hardly impaired and the processing was not felt to be strong. It was also the least obtrusive of the three bleed sizes.
For this reason, from the viewpoint of natural gaze guidance, the CMY bleed with a size of 0.5 pixel is treated as representative of CMY bleeding as a whole. That is, unless otherwise specified, CMY bleeding hereinafter refers to a bleed size of 0.5 pixel.
b. B&W bleeding filter
FIGS. 7A to 7D show the questionnaire results for each bleed size of the B&W bleeding filter.
Among the B&W bleed sizes compared (0.5 pixel, 1.0 pixel, and 1.5 pixel), the 1.5-pixel and 0.5-pixel bleeds made it easiest to resume and continue reading aloud.
However, the 1.5-pixel B&W bleed made the document the hardest to read of the three.
The processing was felt most strongly at 1.0 pixel, followed by 1.5 pixel and then 0.5 pixel.
From these results, the B&W bleed with a size of 0.5 pixel is treated as representative of B&W bleeding as a whole. That is, unless otherwise specified, B&W bleeding hereinafter refers to a bleed size of 0.5 pixel.
[Comparison of the filters]
FIGS. 8A and 8B show the questionnaire results, among the effects of the gaze guidance system S1, for CMY bleeding and B&W bleeding by the bleeding filter, blurring by the blur filter, and the original image. There were 32 subjects, and the average scores are plotted against the question numbers on the horizontal axis.
B&W bleeding gave the best results for ease of reading through the document and ease of finding the reading resumption point.
On the other hand, B&W bleeding was also the condition in which the image processing was felt most strongly, showing a trade-off between how natural the image feels on one hand and how easily the text can be read and the resumption point found on the other.
For CMY bleeding, the ease of reading and of finding the resumption point were rated lower than for the other filters, but still higher than for the trials with the unprocessed original image. CMY bleeding also gave the result that, of all the filters, its image processing was felt least strongly and was least obtrusive while reading the text.
Therefore, B&W bleeding is suitable when readability is to be prioritized in situations where the user may notice the processing, whereas CMY bleeding is suitable when the aim is to achieve natural gaze guidance.
Blurring shows a tendency similar to B&W bleeding. Its processing is felt less strongly than B&W bleeding but more strongly than CMY bleeding, and the degree to which it was felt to be obtrusive was comparable to B&W bleeding.
The scores for the question of whether the image processing felt obtrusive were low overall. This indicates that the mechanism of gradually releasing the image processing once the line of sight has returned after gaze guidance contributes to reducing the subjects' discomfort and stress.
FIG. 9 is a flowchart explaining the sequence of the experiment in the gaze guidance system S1 of the embodiment.
When processing is started (Start) in the control unit 1 of the gaze guidance system S1, in step S1 the filter processing unit 9 of the gaze guidance system S1 applies filtering to everything outside the ROI (Region of Interest; hereinafter simply ROI) of the current process. In step S2, the gaze position detection unit 5 detects the line of sight of the user 3.
In step S3, the control unit 1 determines whether the line of sight of the user 3 has entered the ROI.
If the control unit 1 determines that the line of sight of the user 3 has entered the ROI (YES in step S3), the process proceeds to step S4; if it determines that the line of sight of the user 3 has not entered the ROI (NO in step S3), the process returns to step S2.
In step S4, the filter processing unit 9 gradually cancels the filtering.
In step S5, the gaze position detection unit 5 detects the line of sight of the user 3.
In step S6, the gaze position detection unit 5 determines whether the line of sight of the user 3 has moved to another process (window). If it is determined that the line of sight of the user 3 has moved to another process (YES in step S6), the process proceeds to step S7; if it has not moved to another process (NO in step S6), the process returns to step S5.
In step S7, the area around the last fixation point is set as the ROI of the current process, and processing moves to the other process. Each window is assumed to retain the ROI from the last time the line of sight left it; that is, when the line of sight leaves, the last fixation point is kept as the ROI of the window of the gaze guidance area.
As a result, when the line of sight returns to a window, bleeding (blurring) can begin everywhere except the previously recorded ROI.
Therefore, natural gaze guidance can be performed continuously while working back and forth between the plurality of screens 2a and 2b, or between work areas on a plurality of screens, as in a multi-window system.
In step S8, it is determined whether processing has ended. If it has not ended (NO in step S8), the process returns to step S1 and is repeated. If it has ended (YES in step S8), the control unit 1 ends the series of processes (END).
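Restated as code, the loop of FIG. 9 for a single window might look like the following sketch. The `ROI`, `WindowState`, and callback interfaces are hypothetical stand-ins for the control unit 1, gaze position detection unit 5, and filter processing unit 9; only the control flow of steps S1 to S7 is taken from the flowchart.

```python
from dataclasses import dataclass

@dataclass
class ROI:
    """Circular region of interest around a fixation point (pixel units)."""
    x: float
    y: float
    radius: float = 30.0  # preferred radius at a 60 cm viewing distance

    def contains(self, px, py):
        return (px - self.x) ** 2 + (py - self.y) ** 2 <= self.radius ** 2

@dataclass
class WindowState:
    """Each window keeps the ROI left behind the last time the gaze left it."""
    name: str
    roi: ROI

def visit_window(window, gaze_samples, apply_filter, release_filter):
    """One pass of the FIG. 9 loop for a single window (process).

    gaze_samples yields (x, y, window_name) fixations; apply_filter and
    release_filter stand in for the filter processing unit 9.
    """
    apply_filter(window, exclude=window.roi)              # S1
    released = False
    last = None
    for x, y, win in gaze_samples:                        # S2 / S5
        if win != window.name:                            # S6: gaze moved away
            break
        last = (x, y)
        if not released and window.roi.contains(x, y):    # S3
            release_filter(window)                        # S4: gradual release
            released = True
    if last is not None:                                  # S7: remember the ROI
        window.roi = ROI(last[0], last[1], window.roi.radius)
```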
FIGS. 10A to 10C illustrate images of a document used in the gaze guidance system S1 of the embodiment. For ease of explanation, they are shown schematically with some parts emphasized.
FIG. 10A is the original drawing 101 before processing. Compared with, for example, a picture whose contrast is lowered by neutral colors and gradation (see FIG. 11), it is text consisting of characters with high contrast at their boundaries and strong edges.
FIG. 10B shows the result of the bleeding processing by the CMY bleeding filter of the embodiment. In the gaze non-guidance area 112 around the gaze guidance area 111, the black characters are bled with a color shift.
FIG. 10C shows the result of the blurring processing by the Gaussian blur filter of the comparative example. Referring to FIGS. 10B and 10C, the gaze guidance area 211 appears to produce a gaze guidance effect. In the comparative example of FIG. 10C, however, obtaining a sufficient gaze guidance effect means that the filter in the gaze non-guidance area 212 makes the characters hard to read.
In FIG. 10B of the embodiment, by contrast, the bleeding processing by the CMY bleeding filter is applied, so the user can keep reading the characters in the gaze non-guidance area 112 without any sense of discomfort and without being aware that a filter has been applied.
FIGS. 11A to 11D illustrate images of a picture (photograph) used in the gaze guidance system S1 of the embodiment. For ease of explanation, they are shown schematically with some parts emphasized.
FIG. 11A is the original drawing 201 before processing. Its edges are weakened by, for example, neutral colors and gradation, so the boundaries in the picture are difficult to make out.
FIG. 11B shows the result of the bleeding processing by the CMY bleeding filter of the embodiment. The surrounding gaze non-guidance area 312 is bled with a color shift. However, the gaze guidance area 311 and the gaze non-guidance area 312 are hard to tell apart, so the gaze guidance effect is difficult to obtain.
FIG. 11C shows the result of the blurring processing by the Gaussian blur filter of the comparative example. Here the gaze guidance area 311 and the gaze non-guidance area 312 are easy to tell apart, and the gaze guidance effect is obtained.
Thus, for originals with weak edges, such as paintings and photographs, the blurring processing by the Gaussian blur filter obtains the gaze guidance effect more easily.
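The specification leaves the choice between the CMY bleeding filter and the Gaussian blur filter to the nature of the original image; it does not define an automatic rule. Purely as an illustration of how such a choice could be automated, the following sketch uses the mean gradient magnitude as a crude measure of edge strength; the threshold value is arbitrary.

```python
import numpy as np
from scipy import ndimage

def mean_edge_strength(gray):
    """Average Sobel gradient magnitude of a grayscale image (2-D float array)."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    return float(np.hypot(gx, gy).mean())

def choose_filter(gray, threshold=30.0):
    """Strong-edged text favors CMY bleeding; weak-edged pictures favor blur."""
    return "cmy_bleed" if mean_edge_strength(gray) > threshold else "gaussian_blur"
```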
FIG. 12 is a schematic view showing one example of applying the gaze guidance system S1 of the present invention to an electronic marketplace. In this example, the screen of an electronic advertisement medium 500 is divided into a gaze guidance area 511 and a gaze non-guidance area 512.
FIG. 13 is a schematic view showing one example of applying the gaze guidance system S1 of the present invention to digital signage. In this example, the screen of a digital signage 600 is divided into a gaze guidance area 611 and a gaze non-guidance area 612.
FIG. 14 is a schematic view showing one example of applying the gaze guidance system S1 of the present invention to information visualization. In this example, the screen of a stereoscopic 3D display 700 is divided into a gaze guidance area 711 and a gaze non-guidance area 712.
FIG. 15 is a schematic view showing one example of applying the gaze guidance system S1 of the present invention to an electronic book. In this example, the screen of an electronic book 800 is divided into a gaze guidance area 811 and a gaze non-guidance area 812.
FIGS. 16 and 17 are a schematic diagram and a block diagram showing one example of applying a gaze guidance system S2 of a modification to an actual show window 900. Parts identical or equivalent to those of the gaze guidance system S1 of the embodiment are given the same reference numerals in the following description.
As shown in FIG. 16, in this gaze guidance system S2, in place of the screens 2a and 2b formed by the horizontally long monitor 2 of the gaze guidance system S1 of the embodiment, the output unit 102 that projects data generated by the PC 7 is provided with an external projector 8 as a projector device. The output unit 102 may be installed anywhere inside or outside the actual show window 900, and its image may be reflected or transmitted by a mirror, glass, or the like.
When projecting images onto the articles 901 and 902 displayed in the actual show window 900, the external projector 8 projects the gaze guidance area 911 and the gaze non-guidance area 912 so that they are distinguished from each other.
As shown in FIG. 17, the gaze guidance system S2 of the modification configured in this way can, in addition to the effects of the embodiment, project subtly filtered video such as bleeding (blurring) onto real products using the external projector 8.
For example, subtly filtered video such as bleeding is projected onto the articles 901 and 902, changing how these real objects appear. Relative to them, the article 903 in the gaze guidance area 911, onto which no bleeding is projected, stands out, so the line of sight can be guided to the article 903 naturally.
In the modifications shown in FIGS. 12 to 17, configurations and effects other than those specifically described are identical or equivalent to those of the embodiment, and their description is therefore omitted.
The gaze guidance systems S1 and S2 and the image generation program according to the present embodiments have been described in detail above, but the present invention is not limited to these embodiments, and it goes without saying that they can be modified as appropriate without departing from the spirit of the present invention.
For example, in the present embodiments a CMY bleeding filter has been shown and described as the bleeding filter using an artificial color shift, but the invention is not limited to this. For example, an RGB bleeding filter or a B&W bleeding filter that bleeds in solid black may be used; as long as bleeding processing accompanied by a color shift is performed, any number, color scheme, and combination of color bleeds may be used.
The direction and amount by which each color is shifted are also not particularly limited. For example, the shifts need not be radial at equal angles; they may be made in other directions at unequal angles, and the number of pixels by which each color is shifted may differ from color to color.
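The specification does not give an implementation of the color-shift bleeding; the following sketch shows one plausible reading of it: convert the image to CMY, displace each of the C, M, and Y components by a sub-pixel amount in a different direction, and recombine. The 0.5-pixel shift matches the representative bleed size used in the experiments, while the naive RGB-to-CMY conversion and the equal 120-degree spacing of the shift directions are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def cmy_bleed(rgb, amount=0.5):
    """Bleed processing with artificial color shift.

    Displaces the C, M and Y components of an RGB image (uint8, H x W x 3)
    by `amount` pixels in three equally spaced directions and recombines
    them, so that high-contrast edges acquire faint color fringes.
    """
    img = rgb.astype(np.float32) / 255.0
    cmy = 1.0 - img                                    # naive RGB -> CMY
    angles = np.deg2rad([90.0, 210.0, 330.0])          # one direction per channel
    shifted = np.empty_like(cmy)
    for c, a in enumerate(angles):
        dy, dx = amount * np.sin(a), amount * np.cos(a)
        # order=1 gives bilinear interpolation for the sub-pixel shift
        shifted[..., c] = nd_shift(cmy[..., c], (dy, dx), order=1, mode="nearest")
    bled = 1.0 - shifted                               # CMY -> RGB
    return np.clip(bled * 255.0, 0, 255).astype(np.uint8)
```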
Furthermore, on a screen where pictures and text information are mixed, a blur image produced by the blur filter and a CMY bleeding filter image may be combined on a single screen.
The transparency of each color may also be varied per color element so that the bleed is displayed to suit the application.
In the embodiments, the control unit 1 is described as divided into the filter processing unit 9, the area division unit 109, and the image generation unit 209, but this is not limiting, and the processing may be performed integrally within the control unit 1. Furthermore, the control unit 1 is not limited to a single unit; it may be configured by combining a plurality of processing devices, for example four or eight, and the processing capability, number, and form of the control unit 1 and its internal processing devices are not particularly limited.
Furthermore, the CMY bleeding filter, the B&W bleeding filter that bleeds in solid black, the Gaussian blur filter, and the like may each be applied to part or all of the text portion and the picture portion. That is, a plurality of filter processing units 9 performing different kinds of filter processing may be provided so that different filter processing is performed for each area divided by the area division unit 109; the kind and number of filters are not particularly limited.
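As a rough sketch of how different filters could be assigned to the regions produced by the area division unit 109, the following function applies one filter per named region mask. The mask representation and the reuse of the `cmy_bleed` and `sigma_blur` sketches above are assumptions made for illustration; the actual division into regions is not reproduced here.

```python
import numpy as np

def filter_by_region(image, region_masks, region_filters):
    """Apply a different filter to each divided region of the image.

    region_masks maps a region name to a boolean H x W mask, and
    region_filters maps the same names to callables taking and returning
    an H x W x 3 image.  Pixels outside every mask are left unchanged.
    """
    out = image.copy()
    for name, mask in region_masks.items():
        filtered = region_filters[name](image)
        out[mask] = filtered[mask]
    return out

# Example: bleed the text region, blur the picture region (masks assumed given).
# result = filter_by_region(img,
#                           {"text": text_mask, "picture": picture_mask},
#                           {"text": cmy_bleed,
#                            "picture": lambda im: sigma_blur(im, sigma=2.0)})
```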
In the modification, the filter images are projected by the external projector 8 onto the articles 901 and 902 displayed in the actual show window 900, but this is not limiting; as long as the target is a real object, the shape, number, and material of the objects onto which the images are projected are not particularly limited.
1 Control unit
2 Horizontally long monitor (output unit)
2a, 2b Screens
3 User
4a to 4c Lines of sight
5 Gaze position detection unit
8 External projector (output unit)
9 Filter processing unit
10 Visual target
11, 111 to 911 Gaze guidance areas
12, 112 to 912 Gaze non-guidance areas
900 Actual show window
901 to 903 Articles
S1, S2 Gaze guidance systems
Claims (8)
- A gaze guidance system comprising a filter processing unit that divides a visual target into a gaze guidance area, to which a user's line of sight is to be directed, and a gaze non-guidance area, and guides the user's line of sight from the gaze non-guidance area to the gaze guidance area, wherein the filter processing unit performs bleeding processing with color shift in at least the gaze non-guidance area.
- The gaze guidance system according to claim 1, further comprising an output unit that displays the generated data filtered by the filter processing unit.
- The gaze guidance system according to claim 2, wherein the output unit is a monitor device.
- The gaze guidance system according to claim 2, wherein the output unit is a projector device.
- The gaze guidance system according to any one of claims 1 to 4, wherein, when performing the bleeding processing in the gaze non-guidance area, the filter processing unit uses a bleeding image obtained by shifting at least one of the C, M, and Y components of the original image at edge portions of high contrast.
- The gaze guidance system according to any one of claims 1 to 4, wherein the filter processing unit gradually cancels the bleeding processing applied by image processing to the gaze non-guidance area after the gaze guidance ends.
- The gaze guidance system according to any one of claims 1 to 4, wherein, when the line of sight leaves the gaze guidance area, the filter processing unit keeps the last fixation point as the ROI of the window of the gaze guidance area.
- A gaze guidance system comprising: a filter processing unit that divides a visual target into a gaze guidance area, to which a user's line of sight is to be directed, and a gaze non-guidance area, and guides the user's line of sight from the gaze non-guidance area to the gaze guidance area; and a projector device that projects the generated data filtered by the filter processing unit onto a real object, wherein the filter processing unit generates an image to be projected from the projector device onto at least the gaze non-guidance area.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017229975A JP2019101137A (en) | 2017-11-30 | 2017-11-30 | Sight line guiding system |
JP2017-229975 | 2017-11-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019107507A1 true WO2019107507A1 (en) | 2019-06-06 |
Family
ID=66665654
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/044043 WO2019107507A1 (en) | 2017-11-30 | 2018-11-29 | Line-of-sight guiding system |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2019101137A (en) |
WO (1) | WO2019107507A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11161814A (en) * | 1997-11-25 | 1999-06-18 | Namco Ltd | Image generation device, simulation device and information storage medium |
JP2001223915A (en) * | 1999-12-01 | 2001-08-17 | Sharp Corp | Image processing method, image processor, and image forming device |
JP2004220179A (en) * | 2003-01-10 | 2004-08-05 | Toshiba Corp | Image generation device, image generation method, and image generation program |
JP2007172573A (en) * | 2005-09-28 | 2007-07-05 | Seiko Epson Corp | Document production system, document production method, program and storage medium |
JP2012109723A (en) * | 2010-11-16 | 2012-06-07 | Sharp Corp | Image processing device, image processing method, control program for image processing device, and computer readable recording medium with program recorded thereon |
WO2016151396A1 (en) * | 2015-03-20 | 2016-09-29 | The Eye Tribe | Method for refining control by combining eye tracking and voice recognition |
-
2017
- 2017-11-30 JP JP2017229975A patent/JP2019101137A/en active Pending
-
2018
- 2018-11-29 WO PCT/JP2018/044043 patent/WO2019107507A1/en active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11161814A (en) * | 1997-11-25 | 1999-06-18 | Namco Ltd | Image generation device, simulation device and information storage medium |
JP2001223915A (en) * | 1999-12-01 | 2001-08-17 | Sharp Corp | Image processing method, image processor, and image forming device |
JP2004220179A (en) * | 2003-01-10 | 2004-08-05 | Toshiba Corp | Image generation device, image generation method, and image generation program |
JP2007172573A (en) * | 2005-09-28 | 2007-07-05 | Seiko Epson Corp | Document production system, document production method, program and storage medium |
JP2012109723A (en) * | 2010-11-16 | 2012-06-07 | Sharp Corp | Image processing device, image processing method, control program for image processing device, and computer readable recording medium with program recorded thereon |
WO2016151396A1 (en) * | 2015-03-20 | 2016-09-29 | The Eye Tribe | Method for refining control by combining eye tracking and voice recognition |
Non-Patent Citations (1)
Title |
---|
"Visual Attention Guidance Using Image Resolution Control", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 56, no. 4, 1 May 2015 (2015-05-01), pages 1152 - 1162 * |
Also Published As
Publication number | Publication date |
---|---|
JP2019101137A (en) | 2019-06-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Deemer et al. | Low vision enhancement with head-mounted video display systems: are we there yet? | |
US10064547B2 (en) | Means and method for demonstrating the effects of low cylinder astigmatism correction | |
Leder | Determinants of preference: When do we like what we know? | |
Langlotz et al. | Real-time radiometric compensation for optical see-through head-mounted displays | |
Snyder | Image quality | |
Ananto et al. | Color transformation for color blind compensation on augmented reality system | |
Wolffsohn et al. | Image enhancement of real-time television to benefit the visually impaired | |
Lee et al. | Manipulation of colour and shape information and its consequence upon recognition and best-likeness judgments | |
Vasylevska et al. | Towards eye-friendly vr: how bright should it be? | |
JP2007010924A (en) | Picture display device | |
Zannoli et al. | The perceptual consequences of curved screens | |
Stone et al. | Alpha, contrast and the perception of visual metadata | |
WO2019107507A1 (en) | Line-of-sight guiding system | |
US10895749B2 (en) | Electronic glasses and method operating them | |
Wang et al. | System cross-talk and three-dimensional cue issues in autostereoscopic displays | |
Azuma et al. | A study on gaze guidance using artificial color shifts | |
JP2002315725A (en) | Visual acuity chart | |
Yamaura et al. | Image blurring method for enhancing digital content viewing experience | |
EP3903278B1 (en) | Device and method for enhancing images | |
Peli et al. | The Invisibility of Scotomas I: The Carving Hypothesis | |
Osei-Afriyie et al. | Influence of viewing distance and illumination on projection screen visual performance | |
TR201722651A2 (en) | METHOD AND IMAGING DEVICE | |
Miller | An empirical demonstration of the interactive influence of distance and flatness information on size perception in pictures | |
Zuffi et al. | Controlled and uncontrolled viewing conditions in the evaluation of prints | |
Stewart et al. | Visual impairment simulator for auditing and design |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18882334 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18882334 Country of ref document: EP Kind code of ref document: A1 |