WO2018008128A1 - Video display device and method - Google Patents

Video display device and method

Info

Publication number
WO2018008128A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
video display
display device
unit
visual acuity
Prior art date
Application number
PCT/JP2016/070166
Other languages
English (en)
Japanese (ja)
Inventor
佑人 小松
Original Assignee
Maxell, Ltd. (マクセル株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maxell, Ltd. (マクセル株式会社)
Priority to PCT/JP2016/070166 priority Critical patent/WO2018008128A1/fr
Publication of WO2018008128A1 publication Critical patent/WO2018008128A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36: Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory

Definitions

  • the present invention relates to a video display apparatus and method, and more particularly to a technology for improving screen visibility.
  • Patent Document 1 discloses a mobile terminal device capable of displaying information on the display screen in a desired orientation according to the positional relationship between the display screen of the mobile terminal device and the user, without newly providing a sensor that would prevent downsizing of the mobile terminal device.
  • for this purpose, the device includes "a main body having a display portion capable of displaying at least character information, and a camera provided on the main body and photographing the periphery of the main body,"
  • and a main control unit that acquires information on the user's face based on the image captured by the camera unit and grasps at least the relative positional relationship between the orientation of the face and the orientation of the main body.
  • the main control unit determines the orientation of the information to be displayed on the display screen of the display unit according to the grasped positional relationship, and controls the display control unit so that the information is displayed on the display unit in that orientation (summary excerpt).
  • in Patent Document 1, although the orientation of the displayed information can be changed to match the orientation of the face, visibility is not improved when the characters on the screen are too small for the user to read.
  • the present invention has been made in view of the above problem, and an object of the present invention is to improve the visibility of the screen according to the user's visual acuity.
  • to this end, the present invention is a video display device comprising: a visual acuity information acquisition unit that acquires visual acuity information of a user based on a user image obtained by imaging the user; an object enlargement processing unit that enlarges an object included in a video signal to be displayed, based on the visual acuity information of the user; and a display control unit that performs display control of the video signal.
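  • the three-unit structure above can be sketched as a minimal processing pipeline (an illustrative sketch only, not part of the patent disclosure; all function names, field names, and numeric values are assumptions):

```python
from dataclasses import dataclass

@dataclass
class VisionInfo:
    decimal_acuity: float  # 1.0 corresponds to normal vision

def acquire_vision_info(user_image) -> VisionInfo:
    """Stand-in for the visual acuity information acquisition unit:
    a real implementation would analyze the user image (see the
    embodiments below); a fixed value is returned for illustration."""
    return VisionInfo(decimal_acuity=0.7)

def enlarge_objects(frame: dict, vision: VisionInfo) -> dict:
    """Object enlargement processing unit: scale text up when acuity
    is below normal, leave it unchanged otherwise."""
    scale = 1.0 if vision.decimal_acuity >= 1.0 else 1.0 / vision.decimal_acuity
    out = dict(frame)
    out["text_scale"] = round(scale, 2)
    return out

def display(frame: dict) -> str:
    """Display control unit: here it only reports what would be shown."""
    return f"showing frame with text_scale={frame['text_scale']}"

frame = enlarge_objects({"text_scale": 1.0}, acquire_vision_info(None))
print(display(frame))  # showing frame with text_scale=1.43
```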
  • the drawings include: a figure showing an example of an index pattern; a diagram showing the relationship among distance, visual acuity, and magnification; a flowchart showing the timing of the character enlargement rate determination process in the video display device (television); and a screen display example in the first embodiment, in which (a) shows the screen at the original size and (b) shows the screen after enlargement processing.
  • further included are: a flowchart showing the video display method according to the third embodiment; a flowchart showing the video display method according to the fourth embodiment; and a screen display example displayed in the fourth embodiment, in which (a) shows the display screen at the original size and (b) shows the screen after the enlarged display process.
  • FIG. 1 is a diagram illustrating a hardware configuration of the video display apparatus.
  • the video display device 10 is connected to a camera 20 that images a user, and a user image captured by the camera 20 is input to the video display device 10.
  • the video display device 10 may be any device that can display a video signal, such as a projector or a portable video terminal device such as a smartphone.
  • in this embodiment, a television is taken as an example. Therefore, in this embodiment, the device is connected to a receiving antenna 31 that receives digital broadcast waves. Instead of a receiving antenna, the broadcast signal may be received from cable television. Further, the device is configured to be electrically connectable to a video playback device 32 such as a DVD recorder.
  • the video display device 10 includes a CPU (Central Processing Unit) 11, a RAM (Random Access Memory) 12, a ROM (Read Only Memory) 13, an HDD (Hard Disk Drive) 14, a monitor 15, an input device 16 (including operation buttons and a remote control receiver), an I/F 17, and a bus 18.
  • the CPU 11, RAM 12, ROM 13, HDD 14, monitor 15, input device 16, and I / F 17 are connected to each other via a bus 18.
  • the ROM 13 and the HDD 14 are examples of storage.
  • the storage may be built in the video display device 10 or may be a portable memory that can be attached to and detached from the video display device 10.
  • the camera 20, the receiving antenna 31, and the video playback device 32 are connected to the video display device 10 via the I / F 17.
  • the receiving antenna 31 and the video playback device 32 serve as input sources of the video signal to the video display device 10.
  • FIG. 2 is a functional block diagram showing functions of the video display device 10.
  • the video display device 10 includes, as components of an application program, a distance measuring unit 101, a face region extraction unit 102, an eye region extraction unit 103, a visual acuity measurement unit 104, an age estimation unit 105, a glasses detection unit 106, a contact lens detection unit 107, a DVD (Digital Versatile Disc) drive 108, a caption information extraction unit 109, an object enlargement processing unit 110, an object recognition unit 111, a decoder 112, a broadcast signal reception unit 113, a display control unit 114, an EPG (Electronic Program Guide) generation unit 115, and a main control unit 116.
  • the visual acuity measurement unit 104, the age estimation unit 105, the glasses detection unit 106, and the contact lens detection unit 107 output information indicating an actually measured value of the user's visual acuity, or information for estimating visual acuity (information on the wearing of a vision correction device and estimated age information); together they correspond to the visual acuity information acquisition unit.
  • the eyeglass detection unit 106 and the contact lens detection unit 107 correspond to a correction device detection unit because they detect whether or not a vision correction device (including glasses and contact lenses) is attached.
  • Correction device wearing information indicating whether or not the user is wearing a vision correction device and estimated user age information are stored in the storage. Therefore, a partial area of the storage constitutes a correction device mounting information storage unit.
  • the video display apparatus 10 stores an application program in a storage.
  • the main control unit 116 reads the program from the storage, develops it in the RAM 12, and executes it, whereby the various functions can be realized.
  • the application program may be stored in the storage in advance before the video display device 10 is shipped, or may be stored on an optical medium such as a CD (Compact Disc)/DVD or on a medium such as a semiconductor memory and installed in the video display device 10 via a medium connection unit (a DVD drive is one aspect of a medium connection unit).
  • the application program can also be realized by hardware (an IC chip or the like) having the same functions; when implemented as hardware, each processing unit takes the lead in realizing its function.
  • FIG. 3 is an external view of the video display device 10.
  • the video display device 10 displays an image or video on the monitor 15.
  • the camera 20 may be configured integrally with the video display device 10 or may be configured separately, but is connected to the video display device 10 wirelessly or by wire.
  • the camera 20 is installed at an angle of view where the face of the user who views the screen of the monitor 15 is captured.
  • the distance between the user and the video display device 10 is measured (estimated, if the camera is configured separately) based on the user image captured by the camera 20, and based on this distance the size of objects on the screen, such as characters, is automatically changed and displayed on the monitor 15.
  • the object here refers to character information such as subtitles and EPG from the first embodiment to the third embodiment, and refers to a partial area on the screen of the monitor 15 in the fourth embodiment.
  • FIG. 4 is a flowchart showing the video display method according to the first embodiment.
  • FIG. 5 is a diagram illustrating an example of an index pattern.
  • FIG. 6 is a diagram illustrating the relationship among distance, visual acuity, and magnification.
  • FIG. 7 is a flowchart showing the timing of character enlargement rate determination processing in the video display device (television).
  • FIGS. 8A and 8B are screen display examples according to the present embodiment.
  • FIG. 8A shows a screen according to the original size
  • FIG. 8B shows a screen after enlargement processing.
  • the visual acuity measurement unit 104 displays an index pattern image for measuring the visual acuity of the user on the monitor 15 (S01).
  • the index pattern is, for example, a point index, a ring index, a slit index, or the like.
  • FIG. 5 shows a state in which the slit index pattern 121 is displayed on the monitor.
  • the camera 20 captures the user and generates a user image (S02).
  • the eye area extraction unit 103 extracts the eye area where the user's eyes are captured from the user image (S03).
  • the eye area extraction unit 103 detects the shape of white eyes and black eyes of the extracted eye area.
  • the index pattern is reflected in the extracted eye area. Therefore, the visual acuity measurement unit 104 analyzes the index pattern image projected onto the eye region, acquires corneal shape distribution information based on the analysis result, and calculates the refractive index, thereby measuring the user's visual acuity (S04).
  • the distance measuring unit 101 measures the distance between the user and the video display device 10 (S05). In the present embodiment, the distance measuring unit 101 measures the distance based on the image from the camera 20. The distance between the user's eyes as they appear in the captured image varies inversely with the distance between the user and the screen. Therefore, the distance measuring unit 101 calculates the distance between both eyes based on the extracted eye region, and calculates the distance to the screen based on that distance.
  • alternatively, the distance measuring unit 101 may execute a distance calculation process based on the parallax of a stereo camera, using the face image captured with the stereo camera by the face region extraction unit 102.
  • a distance measuring device different from the camera 20, for example an ultrasonic device, may be connected to the video display device 10, and the distance measuring unit 101 may measure the distance to the user based on the output signal from the distance measuring device.
  • when there are multiple users, the distance measuring unit 101 may be configured to measure the distance to each user based on each user's eye area extracted by the eye region extraction unit 103, and to output as the distance information the distance to the user farthest from the video display device 10.
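  • the interocular-distance approach above can be sketched with a pinhole-camera model (an illustrative sketch only; the focal length and the average interpupillary distance below are assumed values, not from the patent):

```python
AVG_IPD_MM = 63.0         # assumed average adult interpupillary distance (mm)
FOCAL_LENGTH_PX = 1000.0  # assumed camera focal length in pixels

def distance_from_eyes(ipd_pixels: float) -> float:
    """Estimate the user-to-screen distance in mm from the pixel distance
    between the eye centers: under a pinhole model the image-space IPD
    shrinks inversely with range."""
    if ipd_pixels <= 0:
        raise ValueError("interocular pixel distance must be positive")
    return FOCAL_LENGTH_PX * AVG_IPD_MM / ipd_pixels

def farthest_user_distance(ipds_px) -> float:
    """With several viewers, output the distance to the farthest one
    (the one with the smallest image-space IPD)."""
    return max(distance_from_eyes(p) for p in ipds_px)

print(distance_from_eyes(31.5))              # 2000.0 (mm), i.e. 2 m
print(farthest_user_distance([63.0, 31.5]))  # 2000.0, the farthest viewer
```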
  • the object enlargement processing unit 110 determines the enlargement ratio of the object displayed on the monitor 15 based on the distance information and the visual acuity information (S06).
  • the object enlargement rate is also referred to as a character enlargement rate.
  • when the distance is long, the object enlargement processing unit 110 determines that the user is far from the screen and enlarges the displayed character size.
  • when the distance is short, it determines that the user is close to the screen, and the characters are displayed at a reduced size or at the original size included in the video.
  • a threshold based on the screen size may be set so that all characters included in the video still fit on the screen when an object is enlarged; in other words, an upper limit may be set for the enlargement ratio.
  • the screen size may be manually input to the video display device 10 in advance, or may be automatically calculated from the size of the screen captured by the camera 20 using a wide-angle lens.
  • subtitle layouts include vertical, horizontal, and full-screen types.
  • the object size may be displayed with an upper limit based on the size of the display screen.
  • the object size may be changed only when the font size can be enlarged or reduced, such as EPG.
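  • the enlargement-ratio determination of step S06 can be sketched as follows (an illustrative sketch only; the reference distance, the cap, and the acuity scaling are assumptions standing in for the screen-size-based threshold described above):

```python
def char_enlargement_ratio(distance_m: float, decimal_acuity: float,
                           reference_distance_m: float = 2.0,
                           max_ratio: float = 3.0) -> float:
    """The ratio grows with viewing distance and with poorer acuity,
    and is capped so that all characters still fit on the screen
    (max_ratio stands in for the screen-size-based threshold)."""
    ratio = (distance_m / reference_distance_m) / max(decimal_acuity, 0.1)
    return min(max(ratio, 1.0), max_ratio)  # never shrink below original size

print(char_enlargement_ratio(2.0, 1.0))  # 1.0: normal vision at the reference distance
print(char_enlargement_ratio(4.0, 0.5))  # 3.0: capped by the screen-size limit
```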
  • the broadcast signal receiving unit 113 receives a broadcast signal via the receiving antenna 31, and the decoder 112 decodes the broadcast signal to generate a video signal.
  • the object recognition unit 111 performs character recognition processing on the video signal, and expands the character information according to the enlargement rate determined by the object enlargement processing unit 110 in step S06.
  • the display control unit 114 displays the video signal with enlarged characters on the monitor 15 (S07).
  • when the EPG generation unit 115 receives an EPG signal included in the broadcast signal, it generates an EPG.
  • the object enlargement processing unit 110 enlarges the font size of the characters included in the EPG at the enlargement rate determined in S06, and outputs the result to the display control unit 114, which displays it on the monitor 15.
  • the subtitle information extraction unit 109 outputs the subtitle information to the object enlargement processing unit 110.
  • the subtitle characters are enlarged and displayed.
  • alternatively, the moving image signal may be output directly to the display control unit 114, and the display control unit 114 may execute a process of combining the moving image and the caption and display the result on the monitor 15.
  • when the TV is turned on and initially set up (S11/YES),
  • the character enlargement rate determination process (S01 to S06) is executed (S12), and the user's identification information (for example, the shapes of the whites and irises of the eyes obtained by the eye region extraction unit 103 may be used) is stored in the storage in association with the measured visual acuity. The power is then turned off.
  • when the power is turned on again, the character enlargement rate determination process and the enlarged display process may be executed (S14).
  • in step S14, when a television program is to be displayed on the screen, the character enlargement ratio is determined during the standby time (when the monitor is usually dark) between power-on and the start of the program display.
  • once display of the program starts, the characters may be enlarged and displayed.
  • the visual acuity measurement and enlargement ratio determination processes may also be configured to run again, triggered by a channel change or by the input device 16 accepting an operation.
  • FIG. 8 shows a comparative example before and after enlargement of character information.
  • in FIG. 8A, the character information 82 and 83 is displayed at the original size and is relatively small.
  • in FIG. 8B, the character information 86 and 87 is enlarged, but the character strings still fit within the screen 85.
  • the video display device can measure the user's visual acuity, and the character can be enlarged and displayed at an enlargement ratio corresponding to the visual acuity and the distance.
  • in the second embodiment, the magnification is determined by estimating the user's age from the face image, without measuring visual acuity.
  • FIG. 9 is a flowchart showing a video display method according to the second embodiment.
  • FIG. 10 is a diagram illustrating a correspondence relationship between the estimated age and the enlargement rate.
  • after the camera 20 captures the user (S21), the face area extraction unit 102 extracts the area in which the user's face appears (the face area) from the captured image (S22).
  • the age estimation unit 105 estimates the age using features such as the eyes, nose, mouth, hair, and contour in the face area (S23), and outputs the result to the object enlargement processing unit 110.
  • FIG. 10 shows the enlargement ratio data in which the distance / age and the enlargement ratio are associated with each other.
  • the object enlargement processing unit 110 refers to the enlargement rate data, determines the character enlargement rate according to the distance and the estimated age (S24), and enlarges and displays the character information according to the determined enlargement rate (S25).
  • in the enlargement ratio data of FIG. 10, the enlargement rate is defined by a continuous increasing function of age until a threshold based on the screen size is reached; alternatively, an age threshold may be set for the estimated age, with the original size displayed below the age threshold and enlarged display at a predetermined enlargement ratio above it.
  • as described above, the user's estimated age is calculated and used to determine the enlargement ratio of the character information, so that an easy-to-see screen with enlarged characters can be displayed, especially for elderly viewers.
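  • the two age-based policies above (a continuous increasing function of age, or a step at an age threshold) can be sketched as follows (an illustrative sketch only; every constant is an assumption, not a value from FIG. 10):

```python
def ratio_from_age(age: int, distance_m: float, use_threshold: bool = False,
                   age_threshold: int = 60, step_ratio: float = 1.5,
                   max_ratio: float = 3.0) -> float:
    """Two policies described in the text: a continuous increasing
    function of age (and distance), or a simple step at an age
    threshold (original size below it, fixed enlargement above it)."""
    if use_threshold:
        base = step_ratio if age >= age_threshold else 1.0
    else:
        base = 1.0 + max(age - 40, 0) * 0.02  # grows past age 40 (assumed)
    return min(base * (distance_m / 2.0), max_ratio)

print(ratio_from_age(30, 2.0))                      # 1.0: original size
print(ratio_from_age(70, 2.0))                      # about 1.6
print(ratio_from_age(70, 2.0, use_threshold=True))  # 1.5: step policy
```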
  • the third embodiment is an embodiment that performs enlarged display of characters based on the presence or absence of glasses or contact lenses.
  • the third embodiment will be described below with reference to FIG.
  • FIG. 11 is a flowchart showing a video display method according to the third embodiment.
  • the camera 20 routinely images the room, the user's face is imaged and face recognition processing is executed, and learning data that captures each user's habit of wearing glasses is accumulated in the storage (S31).
  • the target user for character enlargement display is imaged by the camera 20 (S32). This process may be executed, for example, as a trigger when an operation to turn on the main power of the television is accepted.
  • the face area extraction unit 102 extracts the user's face area from the captured image (S33).
  • the object enlargement processing unit 110 collates the face area with the learning data, and determines whether the target user is a person who has a habit of wearing glasses.
  • the glasses detection unit 106 executes the glasses detection process based on, for example, the shape and color of glasses: whether a geometric shape such as a frame appears in the face area, or whether a color different from that of the human body is included.
  • the contact lens detection unit 107 determines whether the contact lenses are worn.
  • the eye region extraction unit 103 extracts the eye region, and if the contact lens detection unit 107 detects, for example, a geometric circular shape (corresponding to the shape of a contact lens) in the eye region, it determines that a contact lens is worn (S36/YES).
  • the object enlargement processing unit 110 enlarges the characters using a predetermined enlargement ratio α (α > 1).
  • the display control unit 114 displays on the monitor 15 (S37).
  • otherwise, the object enlargement processing unit 110 does not enlarge the characters and displays them at the original (same) size.
  • according to the present embodiment, a user who usually wears glasses can still visually recognize the objects on the screen on the rare occasions when the glasses are not worn.
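  • the decision logic of this embodiment can be sketched as follows (an illustrative sketch only; the value of α and the boolean inputs standing in for the learning data and the detection results are assumptions):

```python
ALPHA = 1.5  # predetermined enlargement ratio alpha (alpha > 1), assumed value

def decide_ratio(habitually_wears_glasses: bool,
                 glasses_detected: bool,
                 contacts_detected: bool) -> float:
    """Enlarge only when a habitual glasses wearer is currently without
    any vision correction device; otherwise keep the original size."""
    corrected = glasses_detected or contacts_detected
    if habitually_wears_glasses and not corrected:
        return ALPHA  # user probably cannot read original-size characters
    return 1.0        # original (same) size

print(decide_ratio(True, False, False))   # 1.5: habitual wearer, no correction
print(decide_ratio(True, True, False))    # 1.0: glasses are on
print(decide_ratio(False, False, False))  # 1.0: no glasses habit
```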
  • the fourth embodiment is an embodiment in which a user's line of sight is detected and a partial area in the screen that the user is viewing is enlarged and displayed.
  • FIG. 12 is a flowchart showing a video display method according to the fourth embodiment.
  • FIGS. 13A and 13B are screen display examples displayed in the fourth embodiment, where FIG. 13A shows an original size screen and FIG. 13B shows an enlarged display processing screen.
  • the user is imaged by the camera 20 (S41), and the eye area extraction unit 103 extracts the eye area from the captured image and calculates the user's line-of-sight direction from the position of the irises (S42).
  • the positional relationship between the camera 20 and the video display device 10 is fixed.
  • the object recognizing unit 111 calculates the partial area in the screen that the user is viewing by specifying the area of the monitor 15 that is ahead of the user's line of sight (S43).
  • the object enlargement processing unit 110 determines an object enlargement rate (S44).
  • the enlargement ratio of the object in this step is the same as the enlargement ratio described as the character information enlargement ratio in the first to third embodiments; therefore, any of the ratios based on the measured visual acuity, the estimated age, or whether a vision correction device is worn may be used.
  • the object enlargement processing unit 110 enlarges and displays the partial area included in the video signal to be displayed according to the determined enlargement ratio (S45).
  • as a result, the partial area A1 (see FIG. 13A) that the user is viewing on the screen of the monitor 15 is enlarged by the enlarged display process and displayed larger than the other areas (see FIG. 13B).
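  • the mapping from gaze direction to the partial area and its enlargement can be sketched with simplified geometry (an illustrative sketch only; the grid partition of the screen and the normalized gaze coordinates are assumptions, not from the patent):

```python
def gaze_to_region(gaze_x: float, gaze_y: float,
                   screen_w: int, screen_h: int, grid: int = 3):
    """Map a normalized gaze point (0..1 in each axis) to one cell of a
    grid x grid partition of the screen; that cell plays the role of the
    partial area A1. The grid partition itself is an assumption."""
    col = min(int(gaze_x * grid), grid - 1)
    row = min(int(gaze_y * grid), grid - 1)
    cell_w, cell_h = screen_w // grid, screen_h // grid
    return (col * cell_w, row * cell_h, cell_w, cell_h)

def enlarge_region(region, ratio: float):
    """Scale the partial area about its own top-left corner."""
    x, y, w, h = region
    return (x, y, int(w * ratio), int(h * ratio))

a1 = gaze_to_region(0.9, 0.1, 1920, 1080)  # user looks toward the top right
print(a1)                       # (1280, 0, 640, 360)
print(enlarge_region(a1, 1.5))  # (1280, 0, 960, 540)
```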
  • the object enlargement processing unit 110 may make characters easier to see by changing not only the size of the characters but also the contrast with the background.
  • adjusting the contrast makes the characters easier to see even when the enlargement ratio has reached its upper limit and no further enlargement can be performed.
  • the object size of the screen may be changed by applying this embodiment to a car navigation system, a projector, or the like.
  • each of the above-described configurations may be configured such that a part or all of the configuration is configured by hardware, or is realized by executing a program by a processor.
  • control lines and information lines indicate what is considered necessary for the explanation, and not all the control lines and information lines on the product are necessarily shown.

Abstract

Provided is a video display device capable of improving the visibility of a screen according to a user's visual ability. To this end, the video display device 10 comprises: a visual ability information acquisition unit 104 for acquiring visual ability information of a user on the basis of a user image obtained by capturing an image of the user; an object enlargement unit 110 for enlarging an object included in a video signal to be displayed on the basis of the user's visual ability information; and a display control unit 114 for controlling the display of the video signal.
PCT/JP2016/070166 2016-07-07 2016-07-07 Video display device and method WO2018008128A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/070166 WO2018008128A1 (fr) 2016-07-07 2016-07-07 Video display device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/070166 WO2018008128A1 (fr) 2016-07-07 2016-07-07 Video display device and method

Publications (1)

Publication Number Publication Date
WO2018008128A1 true WO2018008128A1 (fr) 2018-01-11

Family

ID=60912453

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/070166 WO2018008128A1 (fr) 2016-07-07 2016-07-07 Video display device and method

Country Status (1)

Country Link
WO (1) WO2018008128A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020080046A (ja) * 2018-11-13 2020-05-28 Sharp Corporation Electronic device, control device, and method for controlling electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004298461A (ja) * 2003-03-31 2004-10-28 Topcon Corp Refraction measuring device
JP2006023953A (ja) * 2004-07-07 2006-01-26 Fuji Photo Film Co Ltd Information display system
JP2013510613A (ja) * 2009-11-13 2013-03-28 Essilor International (Compagnie Générale d'Optique) Method and device for automatically measuring at least one refractive characteristic of both eyes of an individual
JP2013109687A (ja) * 2011-11-24 2013-06-06 Kyocera Corp Mobile terminal device, program, and display control method
JP2014044369A (ja) * 2012-08-28 2014-03-13 Ricoh Co Ltd Image display device
JP2015103991A (ja) * 2013-11-26 2015-06-04 Panasonic IP Management Corp Image processing device, method, and computer program



Similar Documents

Publication Publication Date Title
US8441435B2 (en) Image processing apparatus, image processing method, program, and recording medium
KR102446442B1 (ko) Digital photographing apparatus and method of operating the same
JP6886117B2 (ja) Method for controlling image quality of an image displayed on a display device
US10609273B2 (en) Image pickup device and method of tracking subject thereof
TWI439120B (zh) Display device
US10783654B2 (en) Information processing apparatus, information processing method, and recording medium
US20120133754A1 (en) Gaze tracking system and method for controlling internet protocol tv at a distance
CN101301236B (zh) Eyesight protection system and method based on three-dimensional imaging
US20150317956A1 (en) Head mounted display utilizing compressed imagery in the visual periphery
JP2013533672A (ja) Three-dimensional image processing
EP3316568B1 (fr) Digital photographing device and method of operating the same
JP6341755B2 (ja) Information processing apparatus, method, program, and recording medium
US20170372679A1 (en) Mobile Terminal for Automatically Adjusting a Text Size and a Method Thereof
CN107037584B (zh) Smart glasses see-through method and system
KR20170011362A (ko) Image processing apparatus and method thereof
US20180218714A1 (en) Image display apparatus, image processing apparatus, image display method, image processing method, and storage medium
JP2021105694A (ja) Imaging apparatus and control method thereof
CN107613405B (zh) VR video subtitle display method and device
WO2019021601A1 (fr) Information processing device, information processing method, and program
JP2013148599A (ja) Display device
EP3561570A1 (fr) Head-up display apparatus and method for providing visual aid thereof
WO2018008128A1 (fr) Video display device and method
CN110602475B (zh) Method and device for improving image quality, VR display device, and control method
US20220329740A1 (en) Electronic apparatus, method for controlling electronic apparatus, and non-transitory computer readable storage medium
JP5121999B1 (ja) Position coordinate detection device, position coordinate detection method, and electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16908172

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16908172

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP