WO2018008128A1 - Video display device and method - Google Patents
- Publication number
- WO2018008128A1 (PCT/JP2016/070166)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- video display
- display device
- unit
- visual acuity
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
Provided is a video display device capable of improving visibility on a screen in accordance with a user's eyesight. To this end, the video display device 10 includes: an eyesight information acquisition unit 104 for acquiring the eyesight information of a user on the basis of a user image obtained by capturing an image of the user; an object magnification unit 110 for magnifying an object included in a video signal to be displayed on the basis of the eyesight information of the user; and a display control unit 114 for controlling the display of the video signal.
Description
The present invention relates to a video display apparatus and method, and more particularly to a technology for improving screen visibility.
Patent Document 1 addresses the problem of "providing a portable terminal device capable of displaying information on the display screen in a desired orientation according to the positional relationship between the display screen of the portable terminal device and the user, without newly providing a sensor that would hinder downsizing of the portable terminal device." To this end, it discloses a portable terminal device comprising "a main body having a display unit capable of displaying at least character information; a camera unit provided on the main body for photographing the surroundings of the main body; and a main control unit that acquires information on the user's face based on an image captured by the camera unit and grasps at least the relative positional relationship between the orientation of the face and the orientation of the main body. The main control unit determines the orientation of the information to be displayed on the display screen of the display unit according to the grasped positional relationship, and controls the display control unit so that the information is displayed on the display unit in that orientation (summary excerpt)."
According to Patent Document 1, although the orientation of the displayed information can be changed to match the orientation of the user's face, visibility is not improved when the characters on the screen are too small for the user.
The present invention has been made in view of the above problems, and an object of the present invention is to improve the visibility of a screen according to a user's visual acuity.
In order to achieve the above object, the configurations described in the claims are adopted. As one aspect, for example, the present invention is a video display device comprising: a visual acuity information acquisition unit that acquires visual acuity information of a user based on a user image obtained by imaging the user; an object enlargement processing unit that enlarges an object included in a video signal to be displayed based on the visual acuity information of the user; and a display control unit that performs display control of the video signal.
According to the present invention, it is possible to improve the visibility of the screen according to the visual acuity of the user. Problems, configurations, and effects other than those described above will be clarified by the following description of embodiments.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
<Hardware configuration of the video display device>
The hardware configuration of the video display apparatus according to the present embodiment will be described with reference to FIG. 1. FIG. 1 is a diagram illustrating the hardware configuration of the video display apparatus.
The video display device 10 is connected to a camera 20 that images the user, and the user image captured by the camera 20 is input to the video display device 10. The video display device 10 may be any device capable of displaying a video signal, such as a projector or a portable video terminal such as a smartphone; in the present embodiment, a television is taken as an example. It is therefore connected to a receiving antenna 31 that receives digital broadcast waves. Instead of a receiving antenna, the broadcast signal may be received via cable television. The device is also configured to be electrically connectable to a video playback device 32 such as a DVD recorder.
As shown in FIG. 1, the video display device 10 includes a CPU (Central Processing Unit) 11, a RAM (Random Access Memory) 12, a ROM (Read Only Memory) 13, an HDD (Hard Disk Drive) 14, a monitor 15, an input device 16 (including operation buttons and a remote control receiver), an I/F 17, and a bus 18. The CPU 11, RAM 12, ROM 13, HDD 14, monitor 15, input device 16, and I/F 17 are connected to one another via the bus 18. The ROM 13 and the HDD 14 are examples of storage. The storage may be built into the video display device 10 or may be a portable memory that can be attached to and detached from the video display device 10.
The camera 20, the receiving antenna 31, and the video playback device 32 are connected to the video display device 10 via the I/F 17. The receiving antenna 31 and the video playback device 32 correspond to input sources of the video signal for the video display device 10.
Next, the program configuration of the video display device 10 will be described with reference to FIG. FIG. 2 is a functional block diagram showing functions of the video display device 10.
The video display device 10 includes, as application program components, a distance measuring unit 101, a face region extraction unit 102, an eye region extraction unit 103, a visual acuity measurement unit 104, an age estimation unit 105, a glasses detection unit 106, a contact lens detection unit 107, a DVD (Digital Versatile Disk) drive 108, a caption information extraction unit 109, an object enlargement processing unit 110, an object recognition unit 111, a decoder 112, a broadcast signal reception unit 113, a display control unit 114, an EPG (Electronic Program Guide) generation unit 115, and a main control unit 116. The visual acuity measurement unit 104, the age estimation unit 105, the glasses detection unit 106, and the contact lens detection unit 107 output either information indicating an actual measured value of the user's visual acuity or information for estimating visual acuity (including wearing information for a vision correction device and estimated age information), so they correspond to the visual acuity information acquisition unit. The glasses detection unit 106 and the contact lens detection unit 107 detect whether or not a vision correction device (including glasses and contact lenses) is worn, so they correspond to a correction device detection unit. Correction device wearing information indicating whether or not the user is a person who wears a vision correction device, and the user's estimated age information, are stored in the storage. Therefore, a partial area of the storage constitutes a correction device wearing information storage unit.
The video display device 10 stores the application programs in the storage. The main control unit 116 reads a program from the storage, loads it into the RAM 12, and executes it, whereby the various functions can be realized.
The application programs may be stored in the storage in advance before the video display device 10 is shipped, or may be stored on an optical medium such as a CD (Compact Disk) or DVD, or on a medium such as a semiconductor memory, and installed in the video display device 10 via a medium connection unit (the DVD drive is one form of medium connection unit). The application programs can also be realized by hardware (an IC chip or the like) having the same functions. When realized as hardware, each processing unit takes the lead in realizing its function.
FIG. 3 is an external view of the video display device 10. As shown in FIG. 3, the video display device 10 displays images and video on the monitor 15. The camera 20 may be configured integrally with the video display device 10 or as a separate unit, and is connected to the video display device 10 wirelessly or by wire. The camera 20 is installed with an angle of view that captures the face of the user viewing the screen of the monitor 15. In the present embodiment, the distance between the user and the video display device 10 is measured (or estimated, when the camera is a separate unit) based on the user image captured by the camera 20, and the size of objects such as characters on the screen is automatically changed based on this distance and displayed on the monitor 15. The object here refers to character information such as subtitles and the EPG in the first to third embodiments, and refers to a partial area of the screen of the monitor 15 in the fourth embodiment.
<First Embodiment>
In the first embodiment, the user's visual acuity is measured and characters are enlarged and displayed at an enlargement ratio corresponding to the visual acuity. The first embodiment will be described below with reference to FIGS. 4 to 8, following the flow of FIG. 4. FIG. 4 is a flowchart showing the video display method according to the first embodiment. FIG. 5 is a diagram illustrating an example of an index pattern. FIG. 6 is a diagram illustrating the relationship among distance, visual acuity, and enlargement ratio. FIG. 7 is a flowchart showing the timing of the character enlargement rate determination process in the video display device (television). FIGS. 8(a) and 8(b) are screen display examples according to the present embodiment: FIG. 8(a) shows a screen at the original size, and FIG. 8(b) shows the screen after the enlargement process.
When the main power of the video display device 10 is turned on, the visual acuity measurement unit 104 displays an index pattern image for measuring the visual acuity of the user on the monitor 15 (S01). The index pattern is, for example, a point index, a ring index, a slit index, or the like. FIG. 5 shows a state in which the slit index pattern 121 is displayed on the monitor.
The camera 20 captures the user and generates a user image (S02).
The eye region extraction unit 103 extracts, from the user image, the eye region in which the user's eyes are captured (S03). The eye region extraction unit 103 also detects the shapes of the white (sclera) and dark (iris) parts of the extracted eye region.
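As a concrete illustration of the eye region extraction in S03, the following Python sketch uses OpenCV's bundled Haar eye cascade; this detector is one possible implementation choice and is not specified in the patent.

```python
# Minimal sketch of S03: detect candidate eye regions in the captured user image.
# Using OpenCV's Haar cascade here is an assumption, not the patent's stated method.
import cv2

def extract_eye_regions(user_image_bgr):
    grey = cv2.cvtColor(user_image_bgr, cv2.COLOR_BGR2GRAY)
    cascade_path = cv2.data.haarcascades + "haarcascade_eye.xml"
    eye_cascade = cv2.CascadeClassifier(cascade_path)
    # Returns a list of (x, y, w, h) rectangles, one per detected eye.
    return list(eye_cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5))
```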
The index pattern is reflected in the extracted eye region. The visual acuity measurement unit 104 therefore analyzes the index pattern image projected onto the eye region, acquires corneal shape distribution information based on the analysis result, and calculates the refractive index, thereby measuring the user's visual acuity (S04).
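The patent does not spell out the corneal analysis itself, so the following is only a toy Python sketch of the idea behind S04: the distortion of the reflected index pattern is condensed into a single refractive-error figure and mapped to a decimal acuity value; the formula and all constants are assumptions.

```python
# Toy sketch of S04: map the distortion of the reflected index pattern to an acuity
# value. The linear distortion-to-refraction model and the constants are
# illustrative assumptions, not values taken from the patent.

def acuity_from_pattern_distortion(distortion_ratio):
    """distortion_ratio: 1.0 = reflected slit pattern undistorted; larger = more distorted."""
    refractive_error = 2.0 * (distortion_ratio - 1.0)   # assumed linear relationship
    return max(0.1, min(2.0, 1.5 / (1.0 + abs(refractive_error))))

print(acuity_from_pattern_distortion(1.0))   # undistorted reflection -> high acuity
print(acuity_from_pattern_distortion(1.6))   # strongly distorted -> low acuity
```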
The distance measuring unit 101 measures the distance between the user and the video display device 10 (S05). In the present embodiment, the distance measuring unit 101 measures the distance based on the image from the camera 20. The distance from the user's eyes to the screen can be estimated from the distance between the user's two eyes as they appear in the captured image. The distance measuring unit 101 therefore calculates the distance between the eyes based on the extracted eye region, and calculates the distance to the screen from it.
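As an illustration of this estimation, the sketch below assumes a simple pinhole-camera model; the focal length in pixels and the average interpupillary distance are assumed values, not figures from the patent.

```python
# Sketch of S05: estimate the viewing distance from the pixel distance between the
# user's eyes, assuming a pinhole camera model. Both constants are assumptions.

FOCAL_LENGTH_PX = 1000.0     # camera focal length expressed in pixels (assumed)
AVG_INTEROCULAR_M = 0.063    # average interpupillary distance in metres (assumed)

def estimate_distance_m(eye_centers_px):
    """eye_centers_px: ((x_left, y_left), (x_right, y_right)) in image pixels."""
    (xl, yl), (xr, yr) = eye_centers_px
    interocular_px = ((xr - xl) ** 2 + (yr - yl) ** 2) ** 0.5
    if interocular_px <= 0:
        raise ValueError("eye centers must be distinct")
    # The farther the user, the closer together the eyes appear in the image.
    return FOCAL_LENGTH_PX * AVG_INTEROCULAR_M / interocular_px

print(round(estimate_distance_m(((400, 300), (490, 300))), 2))  # eyes 90 px apart -> 0.7 m
```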
If the camera 20 is a stereo camera, the distance measuring unit 101 executes the distance calculation based on the stereo parallax of the face images captured by the stereo camera and extracted by the face region extraction unit 102.
Further, a distance measuring device different from the camera 20, for example, an ultrasonic device may be connected to the video display device 10, and the distance measuring unit 101 may measure the distance to the user based on the output signal from the distance measuring device.
When there are a plurality of users, the distance measuring unit 101 may measure the distance to each user based on each user's eye region extracted by the eye region extraction unit 103, and output the distance to the user farthest from the video display device 10 as the distance information.
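A minimal sketch of this rule, assuming per-user distances have already been estimated as above:

```python
# Sketch: with several viewers, size text for the farthest one.
def distance_for_enlargement(per_user_distances_m):
    """per_user_distances_m: one estimated distance per detected viewer."""
    return max(per_user_distances_m) if per_user_distances_m else None

print(distance_for_enlargement([1.8, 2.4, 3.1]))  # -> 3.1, the farthest viewer
```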
The object enlargement processing unit 110 determines the enlargement ratio of the object displayed on the monitor 15 based on the distance information and the visual acuity information (S06). In the first to third embodiments, since character information is described as an example of an object, the object enlargement rate is also referred to as a character enlargement rate.
As shown in FIG. 6, when the distance between the user and the video display device 10 is large, the object enlargement processing unit 110 determines that the user is far from the screen and displays the characters at an enlarged size. Conversely, when the distance is short, it determines that the user is close to the screen and displays the characters at a reduced size or at the original size contained in the video. A threshold based on the screen size may be set so that all the characters contained in the video are still displayed when the object is enlarged, giving the enlargement ratio an upper limit. The screen size may be entered into the video display device 10 manually in advance, or may be calculated automatically from the size of the screen as captured by the camera 20 using a wide-angle lens.
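A minimal sketch of the S06 decision under these rules is given below; FIG. 6 only shows the qualitative trend, so the reference distance, the formula, and the screen-size cap are assumed values.

```python
# Sketch of S06: the enlargement ratio grows with viewing distance and with poorer
# visual acuity, and is capped so that enlarged text still fits on the screen.
# All constants and the exact formula are illustrative assumptions.

REFERENCE_DISTANCE_M = 1.0    # distance at which no enlargement is needed (assumed)
MAX_RATIO_FOR_SCREEN = 2.5    # upper limit derived from the screen size (assumed)

def character_enlargement_ratio(distance_m, decimal_acuity):
    ratio = (distance_m / REFERENCE_DISTANCE_M) / max(decimal_acuity, 0.1)
    return min(max(ratio, 1.0), MAX_RATIO_FOR_SCREEN)

print(character_enlargement_ratio(0.8, 1.2))  # close, good eyesight -> 1.0 (original size)
print(character_enlargement_ratio(3.0, 0.5))  # far, weak eyesight   -> capped at 2.5
```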
When the object is a subtitle, the subtitle layout may be vertical, horizontal, or full-screen. For the vertical and horizontal types, an upper limit based on the size of the display screen may be placed on the object enlargement. For the full-screen type, the object size may be changed only when the font size alone can be enlarged or reduced, as with the EPG.
The broadcast signal reception unit 113 receives a broadcast signal via the receiving antenna 31, and the decoder 112 decodes the broadcast signal to generate a video signal. The object recognition unit 111 performs character recognition processing on the video signal, and the character information is enlarged according to the enlargement ratio determined by the object enlargement processing unit 110 in step S06. The display control unit 114 displays the video signal with the enlarged characters on the monitor 15 (S07).
When the EPG generation unit 115 receives an EPG signal included in the broadcast signal, it generates an EPG. When an EPG display operation is performed from the input device 16, the object enlargement processing unit 110 enlarges the font size of the characters included in the EPG at the enlargement ratio determined in S06 and outputs the result to the display control unit 114, which displays it on the monitor 15.
Further, when a moving image signal with accompanying subtitle information is input to the DVD drive 108, the subtitle information extraction unit 109 outputs the subtitle information to the object enlargement processing unit 110, and the subtitle characters are enlarged and displayed. Alternatively, the moving image signal may be output directly to the display control unit 114, and the display control unit 114 may combine the moving image with the subtitles and display the result on the monitor 15.
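The following sketch illustrates this caption path; the class and function names are illustrative rather than taken from the patent.

```python
# Sketch of the caption path: re-render the extracted subtitle at an enlarged font
# size and hand it to the display control step together with the video frame.
from dataclasses import dataclass

@dataclass
class Caption:
    text: str
    font_size_px: int

def enlarge_caption(caption: Caption, ratio: float, max_font_px: int = 96) -> Caption:
    # Cap the font size so the enlarged subtitle still fits on the screen.
    return Caption(caption.text, min(int(caption.font_size_px * ratio), max_font_px))

def compose_frame(video_frame, caption: Caption):
    # Stand-in for the display control unit combining video and enlarged caption.
    return {"frame": video_frame, "caption": caption}

print(enlarge_caption(Caption("News headline", 28), 1.8))  # -> 50 px caption
```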
As shown in FIG. 7, when the television is turned on and its initial setup is performed (S11/YES), the character enlargement rate determination process (S01 to S06) is executed (S12), and identification information of the user (for example, the sclera and iris shapes detected by the eye region extraction unit 103 may be used) is associated with the measured visual acuity and stored in the storage. The power is then turned off.
Next, when the power is turned on again to watch television, or when the display transitions from the initial setup directly to the viewing screen (S13/YES), the character enlargement rate determination process and the enlarged display process may be executed (S14).
In step S14, when a television program is to be displayed on the screen, the character enlargement rate determination process may be executed during the standby time from power-on until the program appears (usually while the monitor is still dark), so that the characters are already enlarged when display of the program starts. When the channel is changed, the viewing user may also have changed; the device may therefore be configured so that the input device 16 accepting a channel-change operation triggers the user image capture, visual acuity measurement, and enlargement rate determination process again.
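A minimal event-handling sketch of these triggers follows; the event names and the stand-in measurement function are assumptions made for illustration.

```python
# Sketch of FIG. 7 / S14 timing: re-run the enlargement-ratio determination at
# initial setup, during the post-power-on standby period, and after a channel change.

def determine_enlargement_ratio():
    return 1.5   # stand-in for the S01-S06 measurement pipeline described above

def handle_event(event, state):
    if event in ("initial_setup", "power_on_standby", "channel_changed"):
        state["ratio"] = determine_enlargement_ratio()
    return state

state = {"ratio": 1.0}
for event in ("initial_setup", "program_started", "channel_changed"):
    state = handle_event(event, state)
print(state["ratio"])
```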
FIG. 8 shows a comparison before and after enlargement of the character information. As shown in FIG. 8(a), on the screen 81 before enlargement, the character information 82 and 83 is displayed at its original size and is relatively small. In contrast, as shown in FIG. 8(b), on the screen 85 after enlargement, the character information 86 and 87 is enlarged while the character strings still fit within the screen 85.
According to the present embodiment, the video display device measures the user's visual acuity, and characters can be enlarged and displayed at an enlargement ratio corresponding to the visual acuity and the distance.
<Second Embodiment>
In the second embodiment, the enlargement ratio is determined by estimating the user's age from the face image, without measuring visual acuity. FIG. 9 is a flowchart showing the video display method according to the second embodiment. FIG. 10 is a diagram illustrating the correspondence between the estimated age and the enlargement ratio.
After capturing the user with the camera 20 (S21), the face area extraction unit 102 extracts an area (face area) where the user's face is imaged from the captured image (S22).
The age estimation unit 105 estimates the age using features such as the eyes, nose, mouth, hair, and contour in the face area (S23), and outputs the result to the object enlargement processing unit 110.
FIG. 10 shows enlargement ratio data in which distance and age are associated with the enlargement ratio. The object enlargement processing unit 110 refers to the enlargement ratio data, determines the character enlargement ratio according to the distance and the estimated age (S24), and enlarges and displays the character information according to the determined ratio (S25). In the enlargement ratio data of FIG. 10, the enlargement ratio is defined as a continuously increasing function of age until the threshold based on the screen size is reached; alternatively, an age threshold may be set for the estimated age so that the original size is used below the age threshold and enlarged display at a predetermined ratio is performed at or above it.
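Both variants are sketched below; the numeric coefficients and thresholds are assumptions, since FIG. 10 is not reproduced here.

```python
# Sketch of the FIG. 10 relationship: either a continuously increasing function of
# age and distance capped by the screen size, or a single age-threshold rule.
MAX_RATIO_FOR_SCREEN = 2.5   # screen-size cap (assumed)

def ratio_from_age_continuous(distance_m, estimated_age):
    ratio = 1.0 + 0.02 * max(estimated_age - 40, 0) + 0.2 * max(distance_m - 1.0, 0)
    return min(ratio, MAX_RATIO_FOR_SCREEN)

def ratio_from_age_threshold(estimated_age, age_threshold=65, fixed_ratio=1.8):
    return fixed_ratio if estimated_age >= age_threshold else 1.0

print(ratio_from_age_continuous(2.5, 72))   # elderly viewer at 2.5 m -> about 1.94
print(ratio_from_age_threshold(72))         # -> 1.8
```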
According to the present embodiment, the user's estimated age is calculated and the enlargement ratio of the character information is determined from the result, so an easy-to-read screen with enlarged characters can be displayed, particularly for elderly viewers.
<Third Embodiment>
The third embodiment performs enlarged display of characters based on the presence or absence of glasses or contact lenses. The third embodiment will be described below with reference to FIG. 11. FIG. 11 is a flowchart showing the video display method according to the third embodiment.
On a day-to-day basis, the room is imaged with the camera 20, the user's face is imaged and face recognition processing is executed, and learning data indicating whether each user habitually wears glasses is accumulated in the storage (S31).
The target user for the enlarged character display is imaged by the camera 20 (S32). This process may be triggered, for example, by accepting an operation that turns on the main power of the television.
The face area extraction unit 102 extracts the user's face area from the captured image (S33).
The object enlargement processing unit 110 collates the face area with the learning data and determines whether the target user is a person who habitually wears glasses. If the user habitually wears glasses (S34/YES), the glasses detection unit 106 executes glasses detection on the face area based on, for example, the shape and color of glasses: whether a geometric shape such as a frame appears in the face area, and whether a color different from that of the human body is included.
If glasses are not worn (S35/NO), the contact lens detection unit 107 determines whether contact lenses are worn. The eye region extraction unit 103 extracts the eye region, and if the contact lens detection unit 107 detects, for example, a geometric circular shape (corresponding to the shape of a contact lens) in the eye region, it determines that contact lenses are worn (S36/YES).
If a person who habitually wears glasses is wearing neither glasses nor contact lenses (S36/NO), the object enlargement processing unit 110 enlarges the characters using a predetermined enlargement ratio α (α > 1), and the display control unit 114 displays them on the monitor 15 (S37).
If the person does not habitually wear glasses (S34/NO), is wearing glasses (S35/YES), or is wearing contact lenses (S36/YES), the object enlargement processing unit 110 does not enlarge the characters and displays them at the original size (unmagnified).
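A minimal sketch of this S34 to S37 decision flow; the concrete value of α is an assumption, as the patent only requires α > 1.

```python
# Sketch of S34-S37: enlarge only when a habitual glasses wearer is currently
# wearing neither glasses nor contact lenses.
ALPHA = 1.5   # predetermined ratio, alpha > 1 (value assumed)

def enlargement_for_correction(habitually_wears_glasses: bool,
                               glasses_detected: bool,
                               contacts_detected: bool) -> float:
    if not habitually_wears_glasses:               # S34/NO
        return 1.0
    if glasses_detected or contacts_detected:      # S35/YES or S36/YES
        return 1.0
    return ALPHA                                   # S36/NO -> enlarge (S37)

print(enlargement_for_correction(True, False, False))   # -> 1.5
print(enlargement_for_correction(True, True, False))    # -> 1.0
```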
According to the present embodiment, objects on the screen remain visually recognizable even on the rare occasions when a user who usually wears glasses is not wearing them.
<Fourth Embodiment>
In the fourth embodiment, the user's line of sight is detected and the partial area of the screen that the user is viewing is enlarged and displayed. FIG. 12 is a flowchart showing the video display method according to the fourth embodiment. FIGS. 13(a) and 13(b) are screen display examples in the fourth embodiment: FIG. 13(a) shows the original-size screen and FIG. 13(b) shows the screen after the enlarged display process.
The user is imaged by the camera 20 (S41), and the eye region extraction unit 103 extracts the eye region in the captured image and calculates the user's line-of-sight direction from the position of the iris (S42). In the present embodiment, the positional relationship between the camera 20 and the video display device 10 is assumed to be fixed.
The object recognizing unit 111 calculates the partial area in the screen that the user is viewing by specifying the area of the monitor 15 that is ahead of the user's line of sight (S43).
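As a geometric illustration of S42 and S43, the sketch below intersects the gaze ray with the screen plane, assuming the eye position relative to the screen (from the fixed camera placement and the measured distance) and a flat screen; the coordinate convention is an assumption.

```python
# Sketch of S42-S43: map a gaze direction to the point on the screen being viewed.
# Screen plane is z = 0 with its lower-left corner at the origin (assumed convention).

def gazed_point_on_screen(eye_pos_m, gaze_dir, screen_w_m, screen_h_m):
    """eye_pos_m: (x, y, z) of the eye, z = distance from the screen.
    gaze_dir: (dx, dy, dz) pointing toward the screen, so dz < 0."""
    x, y, z = eye_pos_m
    dx, dy, dz = gaze_dir
    t = -z / dz                         # ray parameter where it meets the screen plane
    px, py = x + t * dx, y + t * dy
    inside = 0.0 <= px <= screen_w_m and 0.0 <= py <= screen_h_m
    return (px, py) if inside else None

# Viewer 2 m away, slightly left of centre, looking a little right and down.
print(gazed_point_on_screen((0.5, 0.4, 2.0), (0.05, -0.02, -1.0), 1.2, 0.7))
```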
The object enlargement processing unit 110 determines the object enlargement ratio (S44). The enlargement ratio in this step is the same as the enlargement ratio described as the character information enlargement ratio in the first to third embodiments; therefore, any of the ratios based on the visual acuity measurement result, the estimated age, or whether a vision correction device is worn may be used.
The object enlargement processing unit 110 enlarges and displays the partial area included in the video signal to be displayed according to the determined enlargement ratio (S45).
In the present embodiment, as shown in FIG. 13, the partial area A1 (see FIG. 13(a)) that the user is viewing on the screen of the monitor 15 is enlarged by the enlarged display process and shown larger than the other areas (see FIG. 13(b)).
Each of the above embodiments shows only one aspect of the present invention, and the present invention includes various modifications. The embodiments have been described in detail to explain the invention clearly and are not necessarily limited to those having all of the described configurations. Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment. For part of the configuration of each embodiment, other configurations can be added, deleted, or substituted.
For example, the object enlargement processing unit 110 may improve the legibility of characters by changing not only the character size but also the contrast with the background. Changing the contrast makes the characters easier to see even when they cannot be enlarged further because the object enlargement ratio has reached its upper limit.
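As one possible sketch of such a contrast adjustment, the grey levels of the characters and their background could be pushed apart around their midpoint when the enlargement ratio is already at its upper limit; the target difference value and the simple greyscale model are assumptions made for this example only.

```python
def adjust_contrast(fg_gray, bg_gray, min_diff=120):
    """Push the character and background grey levels apart when needed.

    fg_gray, bg_gray: 0-255 grey levels of the character and its background.
    min_diff: illustrative target difference; not a value from the disclosure.
    Returns adjusted (fg, bg), moved outward around their midpoint toward the
    target difference (clamped to the 0-255 range). Intended for the case
    where the object is already at the upper limit of the enlargement ratio.
    """
    if abs(fg_gray - bg_gray) >= min_diff:
        return fg_gray, bg_gray
    mid = (fg_gray + bg_gray) / 2.0
    half = min_diff / 2.0
    if fg_gray <= bg_gray:
        return int(max(mid - half, 0)), int(min(mid + half, 255))
    return int(min(mid + half, 255)), int(max(mid - half, 0))


if __name__ == "__main__":
    print(adjust_contrast(110, 140))  # (65, 185): spread around the midpoint 125
```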
This embodiment may also be applied to a car navigation system, a projector, or the like to change the object size on the screen.
Each of the above configurations may be implemented partly or entirely in hardware, or may be realized by a processor executing a program. The control lines and information lines shown are those considered necessary for the explanation; not all control lines and information lines of a product are necessarily shown.
10: video display device, 15: monitor, 20: camera
Claims (7)
- A video display device comprising:
  a visual acuity information acquisition unit that acquires visual acuity information of a user based on a user image obtained by imaging the user;
  an object enlargement processing unit that enlarges an object included in a video signal to be displayed, based on the user's visual acuity information; and
  a display control unit that performs display control of the video signal.
- The video display device according to claim 1, wherein the visual acuity information acquisition unit includes:
  an eye region extraction unit that extracts an eye region in which the user's eyes are captured in the user image; and
  a visual acuity measurement unit that displays an index pattern for measuring the user's visual acuity and measures the user's visual acuity based on an analysis result of the index pattern reflected in the eye region.
- The video display device according to claim 1, further comprising a face region extraction unit that extracts, from the user image, a face region in which the user's face is captured, wherein
  the visual acuity information acquisition unit includes an age estimation unit that determines an estimated age of the user captured in the user image based on a feature amount of the face region, and
  the object enlargement processing unit enlarges and displays the object according to the estimated age.
- The video display device according to claim 1, wherein
  the visual acuity information acquisition unit includes a correction device wearing information storage unit that stores correction device wearing information indicating whether the user is a person who wears a vision correction device, and a correction device detection unit that detects whether a vision correction device is captured in the user image, and
  the object enlargement processing unit enlarges and displays the object when no vision correction device can be detected in the user image of a user who wears a vision correction device.
- The video display device according to claim 1, further comprising a distance measurement unit that calculates a distance between the user and the video display device, wherein
  the object enlargement processing unit increases the enlargement ratio of the object as the distance between the user and the video display device increases, within the range of enlargement ratios at which the object fits on the display screen.
- The video display device according to claim 1, wherein the object enlargement processing unit enlarges character information included in the video signal to be displayed and a partial area of the video signal.
- A video display method for displaying video on a video display device, the method comprising:
  acquiring visual acuity information of a user based on a user image obtained by imaging the user;
  enlarging an object included in a video signal to be displayed, based on the user's visual acuity information; and
  displaying the video signal in which the object has been enlarged.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/070166 WO2018008128A1 (en) | 2016-07-07 | 2016-07-07 | Video display device and method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018008128A1 (en) | 2018-01-11 |
Family
ID=60912453
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/070166 WO2018008128A1 (en) | Video display device and method | 2016-07-07 | 2016-07-07 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018008128A1 (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004298461A (en) * | 2003-03-31 | 2004-10-28 | Topcon Corp | Refraction measuring apparatus |
JP2006023953A (en) * | 2004-07-07 | 2006-01-26 | Fuji Photo Film Co Ltd | Information display system |
JP2013510613A (en) * | 2009-11-13 | 2013-03-28 | エシロル アンテルナショナル(コンパーニュ ジェネラル ドプテーク) | Method and apparatus for automatically measuring at least one refractive property of a person's eyes |
JP2013109687A (en) * | 2011-11-24 | 2013-06-06 | Kyocera Corp | Portable terminal device, program and display control method |
JP2014044369A (en) * | 2012-08-28 | 2014-03-13 | Ricoh Co Ltd | Image display device |
JP2015103991A (en) * | 2013-11-26 | 2015-06-04 | パナソニックIpマネジメント株式会社 | Image processing apparatus, method and computer program |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020080046A (en) * | 2018-11-13 | 2020-05-28 | シャープ株式会社 | Electronic apparatus, control device, and control method of electronic apparatus |
JP2022537628A (en) * | 2019-05-02 | 2022-08-29 | アー・ファウ・アー ライフサイエンス ゲー・エム・ベー・ハー | biological binding molecule |
JP7529690B2 (en) | 2019-05-02 | 2024-08-06 | アー・ファウ・アー ライフサイエンス ゲー・エム・ベー・ハー | Biological binding molecules |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16908172; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16908172; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: JP |