WO2011145788A1 - Touch screen device and user interface for the visually impaired - Google Patents

Touch screen device and user interface for the visually impaired

Info

Publication number
WO2011145788A1
Authority
WO
WIPO (PCT)
Prior art keywords
touch
touch position
unit
voice
screen device
Prior art date
Application number
PCT/KR2010/007753
Other languages
French (fr)
Korean (ko)
Inventor
박건
박영숙
Original Assignee
(주) 에스엔아이솔라
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주) 에스엔아이솔라
Publication of WO2011145788A1


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B21/001 Teaching or communicating with blind persons
    • G09B21/003 Teaching or communicating with blind persons using tactile presentation of the information, e.g. Braille displays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0412 Digitisers structurally integrated in a display
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Definitions

  • The present invention relates to a touch screen device for the visually impaired and to a method for implementing a user interface in the device.
  • More particularly, it relates to a touch screen device for the visually impaired that, when movement from a specific touch position to a neighboring touch position is detected without release of contact on the touch screen, outputs the object corresponding to the moved-to touch position as voice and, when a one-touch is then detected at the same position, executes the object corresponding to the one-touched position.
  • A full-touch touch screen device adopts, in place of keyboard input, a touch screen input method in which the user touches characters or specific locations shown on the screen; such devices are now widely used in fields as varied as computer equipment, mobile communication terminals, kiosk terminals, and car AV systems.
  • In addition, ATM (Automated Teller Machine) devices for cash withdrawal, guidance computers in public places, and similar equipment mostly adopt the touch screen input method.
  • A full-touch touch screen device of this kind outputs a voice when an object displayed on the screen is focused with a finger, and so demands chance hits through random manipulation from a visually impaired user who cannot predict the position and direction of objects.
  • In addition, because a visually impaired user cannot recognize the positions of objects, the finger cannot always be moved from the focused position toward a valid object.
  • An object of the present invention is to enable a visually impaired user of a touch screen device to recognize in advance the position and direction of the objects displayed on the touch screen, so that objects can be freely selected and executed; to this end, the present invention provides a touch screen device for the visually impaired and a method of implementing a user interface in the device.
  • According to the present invention, there is provided a touch screen device for the visually impaired comprising: a touch input unit in which a predetermined number of grid cells are formed, a touch position displaying an object being arranged on each cell; a touch sensing unit which, when contact with a touch position by the user is detected, generates a touch detection signal containing the position information of the contacted touch position; an output information determination unit which, based on the touch detection signal from the touch sensing unit, determines whether the contact is moving continuously to a neighboring touch position without release or whether a one-touch has occurred, and, according to the determination result, requests that the object corresponding to each touch position be output as voice or executed; a speech synthesis processing unit which, in response to a voice output request from the output information determination unit, converts the corresponding object into voice data to be output as voice; and an object execution unit which executes the corresponding object in response to an object execution request from the output information determination unit.
  • The touch screen device may further include an array structure guide unit that announces the array (grid) structure of the touch screen by voice when the power is turned on.
  • The object may be at least one of a program name, a function name, a menu name, a list name, a file name, a sentence, an image, a video, and audio.
  • The touch sensing unit obtains position information on the contacted touch position and transmits a touch detection signal including the position information to the output information determination unit.
  • When the touch detection signal for a touch position is maintained beyond a predetermined time, the output information determination unit transmits to the speech synthesis processing unit a voice output request signal for outputting the object corresponding to that touch position as voice.
  • When continuous movement to a neighboring touch position is detected without release of contact, the output information determination unit transmits to the speech synthesis processing unit a voice output request signal for outputting, as voice, the object corresponding to each touch position contacted along the movement.
  • When a one-touch is detected while the contact is moving to neighboring touch positions without release, the output information determination unit transmits to the object execution unit an object execution request signal for executing the object corresponding to the one-touched position.
  • When the position information lies on a boundary between different touch positions, the output information determination unit calculates the contact ratio and recognizes the object corresponding to the touch position with the higher ratio.
  • The grid of the touch input unit is laid out consistently according to a predetermined rule, and no invalid area exists.
  • If no object is displayed at the touch position contacted by the user, the speech synthesis processing unit outputs a voice meaning 'empty'.
  • According to the present invention, there is also provided a method of implementing a user interface in a touch screen device for the visually impaired, comprising: (a) when contact with a specific touch position by the user is detected, determining whether the contact exceeds a predetermined time; (b) when the determination of step (a) finds that the time is exceeded, outputting the object corresponding to the contacted touch position as voice; and (c) when movement from the specific touch position to a neighboring touch position is detected without release of contact, outputting the object corresponding to the moved-to touch position as voice and, when a one-touch is then detected, executing the object corresponding to the one-touched position.
  • Before step (a), when the power is turned on, the grid structure of the touch screen is output as voice.
  • The method may further include executing the object corresponding to the contacted touch position when release of contact is detected before the time of step (a) has been exceeded.
  • According to the present invention, when a visually impaired person uses a full-touch information processing device, the position and direction of the objects displayed on the screen are recognized in advance, so the objects can be freely manipulated.
  • In addition, the visually impaired can easily select menus and execute programs.
  • FIG. 1 is a block diagram schematically showing the configuration of a touch screen device for the visually impaired according to the present invention.
  • FIG. 2 is an exemplary diagram in which the grid of the touch input unit shown in FIG. 1 consists of a total of 12 touch positions in a 3 by 4 arrangement.
  • FIG. 3 is a flowchart illustrating a method for implementing a user interface in a touch screen device for the visually impaired according to the present invention.
  • FIG. 1 is a block diagram schematically showing the configuration of a touch screen device for the visually impaired according to the present invention.
  • Referring to FIG. 1, the touch screen device 100 for the visually impaired includes a touch input unit 102, a touch sensing unit 104, an output information determination unit 106, a speech synthesis processing unit 108, a sound output unit 110, and an object execution unit 112.
  • The touch input unit 102 has a predetermined number of equally sized grid cells formed adjacent to one another, and a touch position displaying an object is arranged on each cell. That is, each touch position displays a function item to be selected, i.e. an object.
  • The object may be a program name, a function name, a menu name, a list name, a file name, a sentence, an image, a video, audio, and so on.
  • In the touch input unit 102, touch positions are set on the touch screen without any invalid area, according to a predetermined arrangement rule, and the user selects an object by making or releasing finger contact.
  • The arrangement rule can take various forms, such as an arrangement of 12 touch positions in 3 by 4 or an arrangement of 10 touch positions in 2 by 5.
  • The touch positions of the touch input unit 102 are described below using the '3 by 4' arrangement as an example.
  • The touch input unit 102 is divided into 3 by 4 screen partitions, like a checkerboard: 3 cells across and 4 cells down.
  • The partitions define only touch ranges; the shape of the grid is not actually drawn on the screen. A 3 by 4 touch partition thus gives a total of 12 touch positions.
  • If the set of objects composed for one purpose exceeds 12, a 'screen switch' function fills a new set of 12 screen sections; two of the 12 touch positions then hold the objects 'next screen' and 'previous screen' for switching.
  • An object may occupy one or more touch positions according to its function and purpose. For example, a single image object need not occupy only one touch position and may fill all 12 areas, since keeping a display form suited to the nature of an image prevents the information from being distorted; video, sentence, and list objects may likewise be displayed across two or more screen sections.
  • In the full-touch touch screen device 100, the objects displayed on the touch screen are arranged according to a fixed rule throughout the entire process of carrying out one purpose of use; the arrangement places objects adjacent to one another and spreads them over the entire screen area, so no invalid area exists.
  • The touch input unit 102 is synchronized with the touch screen output unit: when the user makes or releases finger contact on the touch screen output unit, the touch input unit 102 is activated.
  • When contact with a touch position by the user is detected, the touch sensing unit 104 generates a touch detection signal including the contacted position information.
  • That is, the touch sensing unit 104 obtains position information on the contacted touch position and transmits a touch detection signal including the position information to the output information determination unit 106.
  • Based on the touch detection signal from the touch sensing unit 104, the output information determination unit 106 determines whether the contact is moving continuously to a neighboring touch position without release or whether a one-touch has occurred, and, according to the result, requests that the object corresponding to each touch position be output as voice or executed.
  • Here, release of contact means the action of the user lifting the finger (or pointer) that is in contact with a touch position.
  • That is, when the touch detection signal for a touch position is maintained beyond a predetermined time, the output information determination unit 106 transmits to the speech synthesis processing unit 108 a voice output request signal for outputting the object corresponding to that touch position as voice.
  • When continuous movement to a neighboring touch position is detected without release of contact, the output information determination unit 106 transmits to the speech synthesis processing unit 108 a voice output request signal for outputting, as voice, the object corresponding to each touch position contacted along the movement.
  • The voice output request signal includes the object information corresponding to the touch position.
  • When the touch detection signal for a touch position does not exceed the predetermined time, the output information determination unit 106 transmits to the object execution unit 112 an object execution request signal for executing the object corresponding to that touch position.
  • When a one-touch is detected while the contact is moving to neighboring touch positions without release, the output information determination unit 106 transmits to the object execution unit 112 an object execution request signal for executing the object corresponding to the one-touched position.
  • Here, a one-touch means that the user, while moving between neighboring touch positions without release, lifts the finger at some touch position, re-contacts that released touch position, and lifts the finger again within a set time.
  • When the position information lies on a boundary between different touch positions, the output information determination unit 106 calculates the contact ratio and recognizes the object corresponding to the touch position with the higher ratio.
  • When a voice output request signal is received from the output information determination unit 106, the speech synthesis processing unit 108 converts the corresponding object into voice data and outputs it as voice through the sound output unit 110.
  • Here, the sound output unit 110 is a speaker or the like.
  • If no object is displayed at the touch position contacted by the user, the speech synthesis processing unit 108 outputs a voice meaning 'empty'.
  • When an object execution request signal is received from the output information determination unit 106, the object execution unit 112 executes the corresponding object. That is, the object execution unit 112 loads and executes the object included in the object execution request signal.
  • The touch screen device 100 may further include an array structure guide unit (not shown) that announces the array, or grid structure, of the touch screen by voice when the power is turned on. For example, when the grid structure is 3 by 4, the array structure guide unit outputs the guide voice '3 by 4 top menu', through which the user recognizes the touch area of the screen and its scale.
  • The touch screen device 100 described above may be a computer, a mobile communication terminal, or the like.
  • FIG. 2 is an exemplary diagram in which the grid of the touch input unit shown in FIG. 1 consists of a total of 12 touch positions in a 3 by 4 arrangement.
  • In the case of the 'Display Properties' window of (c), objects are displayed in the 12 touch positions as shown in (d). Since the set of objects numbers fewer than 12, empty touch positions exist. In addition, the object 'You can use a theme to decorate your computer in the style you want' does not fit in one touch position and therefore occupies two.
  • In the case of the menu of (e), objects are displayed in the 12 touch positions as shown in (f), and empty touch positions exist because the set of objects numbers fewer than 12.
  • When the user contacts an empty touch position, the touch screen device outputs a voice meaning 'empty'.
  • A computer screen has been used as the example here, but the screen is output in the same form on touch-based mobile communication terminals such as smartphones and the iPhone.
  • FIG. 3 is a flowchart illustrating a method for implementing a user interface in a touch screen device for the visually impaired according to the present invention.
  • Referring to FIG. 3, when contact with a specific touch position by the user is detected (S300), the touch screen device determines whether the contact exceeds a predetermined time (S302).
  • When the user turns the power on, the touch screen device announces the grid structure of the touch screen by voice. For example, when the grid structure is 3 by 4, it outputs the guide voice '3 by 4 top menu', through which the user recognizes the touch area of the screen and its scale.
  • If the determination of S302 finds that the predetermined time has been exceeded, the touch screen device outputs the object corresponding to the contacted touch position as voice (S304).
  • When movement from the specific touch position to a neighboring touch position is then detected without release of contact (S306), the touch screen device immediately speaks the object corresponding to the touch position the finger has moved to (S308). This is possible because the movement carries the information that the finger has not been lifted between neighboring touch positions. To execute the newly reached object, the user lifts the finger, re-contacts the same position, and lifts the finger again before the predetermined time elapses.
  • If the touch position the finger is passing over holds an empty object, the touch screen device immediately outputs a predetermined signal sound; if a one-touched touch position holds an empty object, it immediately outputs a message saying that it cannot be executed.
  • After S308 has been performed, if a one-touch at some touch position is detected (S310), the touch screen device executes the object corresponding to the one-touched position (S312).
  • If release of contact is detected before the predetermined time of S302 has been exceeded (S314), the touch screen device executes the object corresponding to the released touch position (S316); that is, if the user lifts the finger at once, the device executes the object without speaking it. For example, given objects A, B, and C, if the user touches A and lifts the finger before a predetermined time (for example, 0.15 seconds) has been exceeded, the device runs A directly with no voice output. This is reasonable because the immediate release shows that the user already knew the object's location and manipulated it deliberately.
  • As an example of how such a touch screen device operates: when the user turns on the power, the device outputs the guide voice '3 by 4 top menu'. The moment the user hears the guide voice, the user recognizes not only the current menu position but also the touch area of the screen and its scale. When the user then places a finger on the touch screen and moves it left and right, the device announces by voice what is pointed at, e.g. 'e-mail', 'Skype'. If the user performs a one-touch where 'Skype' was announced, Skype is executed.

Abstract

The present invention relates to a touch screen device for the visually impaired and to a method for providing a user interface therefor. The touch screen device includes: a touch input unit having a predetermined number of grids, each grid having a touch position at which an object is displayed; a contact-sensing unit generating a sensed-contact signal including position information for a contacted touch position when a user contacting a touch position is sensed; an output information determining unit determining, on the basis of the sensed-contact signal from the contact-sensing unit, whether the contact is moving continuously to another adjacent touch position without the release of contact or whether a one-touch has been executed, and requesting that an object corresponding to each touch position be output as audio or be executed according to the determination result; an audio-synthesis processing unit converting a corresponding object into audio data according to an audio output request from the output information determining unit, to output the audio data as audio; and an object-executing unit executing a corresponding object according to an object execution request from the output information determining unit. Thus, according to the present invention, when a visually impaired person uses a full-touch information-processing device, the position and direction of an object displayed on the screen are recognized, such that the object may be manipulated as the person desires.

Description

Touch screen device and user interface for the visually impaired
The present invention relates to a touch screen device for the visually impaired and to a method for implementing a user interface in the device. More particularly, it relates to a touch screen device for the visually impaired that, when movement from a specific touch position to a neighboring touch position is detected without release of contact on the touch screen, outputs the object corresponding to the moved-to touch position as voice and, when a one-touch is then detected at the same position, executes the object corresponding to the one-touched position, and to a method for implementing a user interface in such a device.
A full-touch touch screen device adopts, in place of keyboard input, a touch screen input method in which the user touches characters or specific locations shown on the screen; such devices are now widely used in fields as varied as computer equipment, mobile communication terminals, kiosk terminals, and car AV systems.
In addition, ATM (Automated Teller Machine) devices for cash withdrawal, guidance computers in public places, and similar equipment mostly adopt the touch screen input method.
An existing means by which the visually impaired can use a full-touch device is the iPhone's 'VoiceOver'. A full-touch touch screen device of this kind outputs a voice when an object displayed on the screen is focused with a finger, and so demands chance hits through random manipulation from a visually impaired user who cannot predict the position and direction of objects.
A full-touch touch screen device of this kind therefore suffers from so-called invalid touches: the user must operate it at a level where there is no certainty that the focused position is actually an object.
In addition, because a visually impaired user cannot recognize the positions of objects, the finger cannot always be moved from the focused position toward a valid object.
The present invention has been devised to solve the above problems. Its object is to provide a touch screen device for the visually impaired, and a method of implementing a user interface in the device, with which a visually impaired user can recognize in advance the position and direction of the objects displayed on the touch screen and thus freely select and execute objects.
To achieve these objects, the present invention provides a touch screen device for the visually impaired comprising: a touch input unit in which a predetermined number of grid cells are formed, a touch position displaying an object being arranged on each cell; a touch sensing unit which, when contact with a touch position by the user is detected, generates a touch detection signal containing the position information of the contacted touch position; an output information determination unit which, based on the touch detection signal from the touch sensing unit, determines whether the contact is moving continuously to a neighboring touch position without release or whether a one-touch has occurred, and, according to the determination result, requests that the object corresponding to each touch position be output as voice or executed; a speech synthesis processing unit which, in response to a voice output request from the output information determination unit, converts the corresponding object into voice data to be output as voice; and an object execution unit which executes the corresponding object in response to an object execution request from the output information determination unit.
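To make this composition easier to picture, the sketch below models three of the units as plain Python classes. It is a minimal illustration only: the class names, method signatures, and the print-based speech and execution backends are assumptions of this sketch, not an API defined by the patent. The decision logic of the output information determination unit is sketched separately after the description of FIG. 3.

```python
# Minimal, hypothetical sketch of the units described above.
# Names and signatures are illustrative; the patent defines no API.

class TouchInputUnit:
    """Holds the grid layout and the object shown at each touch position."""
    def __init__(self, cols: int, rows: int, objects: list[str]):
        self.cols, self.rows = cols, rows
        # Pad with the 'empty' object so that no invalid area exists.
        self.objects = objects + ["empty"] * (cols * rows - len(objects))

class SpeechSynthesisUnit:
    """Converts an object name into voice output (stand-in for real TTS)."""
    def speak(self, text: str) -> None:
        print(f"[voice] {text}")

class ObjectExecutionUnit:
    """Loads and runs the object named in an execution request."""
    def execute(self, obj: str) -> None:
        print(f"[exec] {obj}")
```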
The touch screen device may further include an array structure guide unit that announces the array (grid) structure of the touch screen by voice when the power is turned on.
The object may be at least one of a program name, a function name, a menu name, a list name, a file name, a sentence, an image, a video, and audio.
The touch sensing unit obtains position information on the contacted touch position and transmits a touch detection signal including the position information to the output information determination unit.
When the touch detection signal for a touch position is maintained beyond a predetermined time, the output information determination unit transmits to the speech synthesis processing unit a voice output request signal for outputting the object corresponding to that touch position as voice.
In addition, when continuous movement to a neighboring touch position is detected without release of contact, the output information determination unit transmits to the speech synthesis processing unit a voice output request signal for outputting, as voice, the object corresponding to each touch position contacted along the movement.
In addition, when a one-touch is detected while the contact is moving to neighboring touch positions without release, the output information determination unit transmits to the object execution unit an object execution request signal for executing the object corresponding to the one-touched position.
In addition, when the position information lies on a boundary between different touch positions, the output information determination unit calculates the contact ratio and recognizes the object corresponding to the touch position with the higher ratio.
The grid of the touch input unit is laid out consistently according to a predetermined rule, and no invalid area exists.
If no object is displayed at the touch position contacted by the user, the speech synthesis processing unit outputs a voice meaning 'empty'.
According to the present invention, there is also provided a method of implementing a user interface in a touch screen device for the visually impaired, comprising: (a) when contact with a specific touch position by the user is detected, determining whether the contact exceeds a predetermined time; (b) when the determination of step (a) finds that the time is exceeded, outputting the object corresponding to the contacted touch position as voice; and (c) when movement from the specific touch position to a neighboring touch position is detected without release of contact, outputting the object corresponding to the moved-to touch position as voice and, when a one-touch is then detected, executing the object corresponding to the one-touched position.
Before step (a), when the power is turned on, the grid structure of the touch screen is output as voice.
The method may further include executing the object corresponding to the contacted touch position when release of contact is detected before the time of step (a) has been exceeded.
As described above, according to the present invention, a visually impaired person using a full-touch information processing device recognizes in advance the position and direction of the objects displayed on the screen, and can therefore manipulate the objects freely.
In addition, usability and accessibility for the visually impaired can be secured on any information processing device that uses the full-touch method; because the full-touch usage is employed as it is, without adding a separate physical device, the extra cost of securing usability for the visually impaired is reduced.
In addition, a visually impaired user can easily select menus and execute programs.
FIG. 1 is a block diagram schematically showing the configuration of a touch screen device for the visually impaired according to the present invention.
FIG. 2 is an exemplary diagram in which the grid of the touch input unit shown in FIG. 1 consists of a total of 12 touch positions in a 3 by 4 arrangement.
FIG. 3 is a flowchart illustrating a method of implementing a user interface in a touch screen device for the visually impaired according to the present invention.
Explanation of symbols for the main parts of the drawings:
100: touch screen device, 102: touch input unit
104: touch sensing unit, 106: output information determination unit
108: speech synthesis processing unit, 110: sound output unit
112: object execution unit
FIG. 1 is a block diagram schematically showing the configuration of a touch screen device for the visually impaired according to the present invention.
Referring to FIG. 1, the touch screen device 100 for the visually impaired includes a touch input unit 102, a touch sensing unit 104, an output information determination unit 106, a speech synthesis processing unit 108, a sound output unit 110, and an object execution unit 112.
The touch input unit 102 has a predetermined number of equally sized grid cells formed adjacent to one another, and a touch position displaying an object is arranged on each cell. That is, each touch position displays a function item to be selected, i.e. an object, which may be a program name, a function name, a menu name, a list name, a file name, a sentence, an image, a video, audio, and so on.
In the touch input unit 102, touch positions are set on the touch screen without any invalid area, according to a predetermined arrangement rule, and the user selects an object by making or releasing finger contact. The arrangement rule can take various forms, such as an arrangement of 12 touch positions in 3 by 4 or an arrangement of 10 touch positions in 2 by 5.
The touch positions of the touch input unit 102 are described below using the '3 by 4' arrangement as an example. The touch input unit 102 is divided into 3 by 4 screen partitions, like a checkerboard: 3 cells across and 4 cells down. The partitions define only touch ranges; the shape of the grid is not actually drawn on the screen. A 3 by 4 touch partition thus gives a total of 12 touch positions.
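As a concrete picture of this checkerboard mapping, the following small sketch converts a raw touch coordinate into one of the 12 touch positions. It assumes coordinates in screen units with the origin at the top-left corner; numbering the cells row by row from 0 to 11 is an assumption of the sketch.

```python
def touch_position(x: float, y: float, width: float, height: float,
                   cols: int = 3, rows: int = 4) -> int:
    """Map a screen coordinate to one of cols x rows touch positions.

    Cells are numbered row by row (0..11 for 3 by 4). The grid is
    purely logical: nothing is drawn on the screen.
    """
    col = min(int(x * cols / width), cols - 1)    # clamp the right edge
    row = min(int(y * rows / height), rows - 1)   # clamp the bottom edge
    return row * cols + col

# On a 300 x 400 screen, a touch at (250, 90) lands in column 2 of the
# top row, i.e. touch position 2.
assert touch_position(250, 90, 300, 400) == 2
```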
If the set of objects composed for one purpose exceeds 12, a 'screen switch' function fills a new set of 12 screen sections. In that case, two of the 12 touch positions hold the objects 'next screen' and 'previous screen' for switching.
If the set of objects composed for one purpose fills fewer than 12 positions, the remaining touch positions are filled with the object 'empty'.
In displaying objects on the 3 by 4 screen partition, an object may also occupy more than one touch position according to its function and purpose. For example, a single image object need not occupy only one touch position and may fill all 12 areas, since keeping a display form suited to the nature of an image prevents the information from being distorted. For the same reason, video, sentence, and list objects may be displayed across two or more screen sections.
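The paging and fill rules of the last three paragraphs can be summarized in a short sketch. Placing 'previous screen' and 'next screen' in the first and last slots is an assumption of the sketch (the patent only says two of the 12 positions are reserved), and objects spanning several touch positions are not modeled here.

```python
def paginate(objects: list[str], slots: int = 12) -> list[list[str]]:
    """Split one purpose's set of objects into screens of `slots` positions.

    A single underfilled screen is padded with the 'empty' object; when
    several screens are needed, two positions per screen are reserved
    for 'previous screen' and 'next screen'.
    """
    if len(objects) <= slots:
        return [objects + ["empty"] * (slots - len(objects))]
    per_screen = slots - 2   # reserve two positions for screen switching
    screens = []
    for i in range(0, len(objects), per_screen):
        chunk = objects[i:i + per_screen]
        chunk += ["empty"] * (per_screen - len(chunk))
        screens.append(["previous screen"] + chunk + ["next screen"])
    return screens
```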
As described above, the objects displayed on the touch screen of the full-touch touch screen device 100 are arranged according to a fixed rule throughout the entire process of carrying out one purpose of use; the arrangement places objects adjacent to one another and spreads them over the entire screen area, so no invalid area exists.
The touch input unit 102 is also synchronized with the touch screen output unit: when the user makes or releases finger contact on the touch screen output unit, the touch input unit 102 is activated.
When contact with a touch position by the user is detected, the touch sensing unit 104 generates a touch detection signal including the contacted position information.
That is, the touch sensing unit 104 obtains position information on the contacted touch position and transmits a touch detection signal including the position information to the output information determination unit 106.
Based on the touch detection signal from the touch sensing unit 104, the output information determination unit 106 determines whether the contact is moving continuously to a neighboring touch position without release or whether a one-touch has occurred, and, according to the result, requests that the object corresponding to each touch position be output as voice or executed. Here, release of contact means the action of the user lifting the finger (or pointer) that is in contact with a touch position.
That is, when the touch detection signal for a touch position is maintained beyond a predetermined time, the output information determination unit 106 transmits to the speech synthesis processing unit 108 a voice output request signal for outputting the object corresponding to that touch position as voice.
In addition, when continuous movement to a neighboring touch position is detected without release of contact, the output information determination unit 106 transmits to the speech synthesis processing unit 108 a voice output request signal for outputting, as voice, the object corresponding to each touch position contacted along the movement. The voice output request signal includes the object information corresponding to the touch position.
In addition, when the touch detection signal for a touch position does not exceed the predetermined time, the output information determination unit 106 transmits to the object execution unit 112 an object execution request signal for executing the object corresponding to that touch position.
In addition, when a one-touch is detected while the contact is moving to neighboring touch positions without release, the output information determination unit 106 transmits to the object execution unit 112 an object execution request signal for executing the object corresponding to the one-touched position. Here, a one-touch means that the user, while moving between neighboring touch positions without release, lifts the finger at some touch position, re-contacts that released touch position, and lifts the finger again within a set time.
In addition, when the position information lies on a boundary between different touch positions, the output information determination unit 106 calculates the contact ratio and recognizes the object corresponding to the touch position with the higher ratio.
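One way to realize this majority rule is sketched below: the fingertip contact is approximated as a small square patch, and the cell holding the larger share of the patch's area wins. The square contact model is an assumption of the sketch; the patent speaks only of calculating the contact ratio.

```python
def dominant_cell(cx: float, cy: float, half: float,
                  width: float, height: float,
                  cols: int = 3, rows: int = 4) -> int:
    """Return the touch position holding the largest share of a square
    contact patch centered at (cx, cy) with half-width `half`."""
    cell_w, cell_h = width / cols, height / rows
    areas: dict[int, float] = {}
    for row in range(rows):
        for col in range(cols):
            # Overlap of the contact square with this grid cell.
            ox = min(cx + half, (col + 1) * cell_w) - max(cx - half, col * cell_w)
            oy = min(cy + half, (row + 1) * cell_h) - max(cy - half, row * cell_h)
            if ox > 0 and oy > 0:
                areas[row * cols + col] = ox * oy
    return max(areas, key=areas.get)

# A contact straddling the boundary between positions 0 and 1, but lying
# mostly over position 1, is attributed to position 1.
assert dominant_cell(cx=105, cy=50, half=10, width=300, height=400) == 1
```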
When a voice output request signal is received from the output information determination unit 106, the speech synthesis processing unit 108 converts the corresponding object into voice data and outputs it as voice through the sound output unit 110. Here, the sound output unit 110 is a speaker or the like.
In addition, if no object is displayed at the touch position contacted by the user, the speech synthesis processing unit 108 outputs a voice meaning 'empty'.
When an object execution request signal is received from the output information determination unit 106, the object execution unit 112 executes the corresponding object. That is, the object execution unit 112 loads and executes the object included in the object execution request signal.
The touch screen device 100 may further include an array structure guide unit (not shown) that announces the array, or grid structure, of the touch screen by voice when the power is turned on. For example, when the grid structure is 3 by 4, the array structure guide unit outputs the guide voice '3 by 4 top menu', through which the user recognizes the touch area of the screen and its scale.
The touch screen device 100 described above may be a computer, a mobile communication terminal, or the like.
FIG. 2 is an exemplary diagram in which the grid of the touch input unit shown in FIG. 1 consists of a total of 12 touch positions in a 3 by 4 arrangement.
Referring to FIG. 2, in the case of a desktop screen such as (a), objects are displayed in the 12 touch positions as shown in (b). Since the desktop of (a) shows 12 or more menus or programs, one of the 12 touch positions holds the object 'NEXT'.
In the case of the 'Display Properties' window of (c), objects are displayed in the 12 touch positions as shown in (d). Since the set of objects numbers fewer than 12, empty touch positions exist. In addition, the object 'You can use a theme to decorate your computer in the style you want' does not fit in one touch position and therefore occupies two.
In the case of the menu of (e), objects are displayed in the 12 touch positions as shown in (f), and empty touch positions exist because the set of objects numbers fewer than 12. When the user contacts an empty touch position, the touch screen device outputs a voice meaning 'empty'.
A computer screen has been used as the example here, but the screen is output in the same form on touch-based mobile communication terminals such as smartphones and the iPhone.
FIG. 3 is a flowchart illustrating a method of implementing a user interface in a touch screen device for the visually impaired according to the present invention.
Referring to FIG. 3, when contact with a specific touch position by the user is detected (S300), the touch screen device determines whether the contact exceeds a predetermined time (S302). When the user turns the power on, the touch screen device announces the grid structure of the touch screen by voice; for example, when the grid structure is 3 by 4, it outputs the guide voice '3 by 4 top menu', through which the user recognizes the touch area of the screen and its scale.
If the determination of S302 finds that the predetermined time has been exceeded, the touch screen device outputs the object corresponding to the contacted touch position as voice (S304).
Then, when movement from the specific touch position to a neighboring touch position is detected without release of contact (S306), the touch screen device immediately outputs the object corresponding to the moved-to touch position as voice (S308).
That is, if the user, without lifting the finger from the specific touch position, moves to a neighboring touch position after the predetermined time has passed, the touch screen device immediately speaks the object corresponding to the touch position the finger has moved to. This is possible because the movement carries the information that the finger has not been lifted between neighboring touch positions. To execute the newly reached object, the user lifts the finger, re-contacts the same position, and lifts the finger again before the predetermined time elapses.
Also, if the touch position the finger is passing over holds an empty object, the touch screen device immediately outputs a predetermined signal sound; if a one-touched touch position holds an empty object, it immediately outputs a message saying that it cannot be executed.
After S308 has been performed, if a one-touch at some touch position is detected (S310), the touch screen device executes the object corresponding to the one-touched position (S312).
If, as a result of the determination of S302, release of contact is detected before the predetermined time has been exceeded (S314), the touch screen device executes the object corresponding to the released touch position (S316). That is, if the user lifts the finger at once, before the predetermined time has passed, the device executes the object without speaking it. For example, given objects A, B, and C, if the user touches A with a finger and lifts it before a predetermined time (for example, 0.15 seconds) has been exceeded, the device runs A directly with no voice output. This is reasonable because the immediate release shows that the user already knew the object's location and manipulated it deliberately.
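The flow of FIG. 3 can be condensed into one event-driven sketch, shown below. It assumes the host platform delivers down/move/up touch events plus a periodic tick, reuses the 0.15 second example value as the predetermined time, and folds the one-touch of S310/S312 into the quick-tap rule of S314/S316 (a one-touch ends as a re-contact released within the time limit); the signal sounds for empty objects are omitted for brevity. This is an illustration of the described behavior, not the patent's implementation.

```python
import time

HOLD_THRESHOLD = 0.15   # seconds; the example value given above

class OutputInformationUnit:
    """Sketch of the S300-S316 flow: a quick tap executes silently,
    a held touch is spoken, and sliding announces each cell entered."""

    def __init__(self, speak, execute, objects: list[str]):
        self.speak, self.execute, self.objects = speak, execute, objects
        self.down_at = None    # time of the current contact, None when up
        self.pos = None        # touch position currently under the finger
        self.slid = False      # finger crossed into another cell
        self.spoken = False    # current cell already announced

    def on_down(self, pos: int) -> None:                        # S300
        self.down_at, self.pos = time.monotonic(), pos
        self.slid = self.spoken = False

    def on_tick(self) -> None:                                  # S302/S304
        if self.down_at and not self.spoken and \
                time.monotonic() - self.down_at > HOLD_THRESHOLD:
            self.speak(self.objects[self.pos])  # dwell exceeded: announce
            self.spoken = True

    def on_move(self, pos: int) -> None:                        # S306/S308
        if self.down_at is not None and pos != self.pos:
            self.pos, self.slid, self.spoken = pos, True, True
            self.speak(self.objects[pos])       # announce the new cell at once

    def on_up(self) -> None:                                    # S310-S316
        if self.down_at is None:
            return
        held = time.monotonic() - self.down_at
        if held <= HOLD_THRESHOLD and not self.slid:
            self.execute(self.objects[self.pos])  # quick tap: run, no speech
        self.down_at = None
```

A host loop would feed panel events into on_down/on_move/on_up and call on_tick periodically; under this reading, the one-touch of S310/S312 is simply a quick tap performed on the cell that was just announced.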
To give an example of how such a touch screen device operates: when the user turns on the power, the device outputs the guide voice '3 by 4 top menu'. The moment the user hears the guide voice, the user recognizes not only the current menu position but also the touch area of the screen and its scale. If the user then places a finger on the touch screen and moves it left and right, the device announces what is pointed at by voice, e.g. 'e-mail', 'Skype'. If the user performs a one-touch where 'Skype' was announced, Skype is executed.
Meanwhile, those skilled in the art will understand that the present invention may be carried out in other modified forms without changing its essential technical spirit or essential features. The embodiments described above should therefore be understood as illustrative, and the scope of the present invention should be construed to include all changes and modifications derived from the claims and their equivalents.

Claims (15)

  1. A touch screen device for the visually impaired, comprising:
    a touch input unit in which a predetermined number of equally sized grid cells are formed adjacent to one another, a touch position displaying an object being arranged on each cell;
    a touch sensing unit which, when contact with a touch position by a user is detected, generates a touch detection signal including position information of the contacted touch position;
    an output information determination unit which, based on the touch detection signal from the touch sensing unit, determines whether the contact is moving continuously to a neighboring touch position without release or whether a one-touch has occurred, and, according to the determination result, requests that the object corresponding to each touch position be output as voice or executed;
    a speech synthesis processing unit which, in response to a voice output request from the output information determination unit, converts the corresponding object into voice data to be output as voice; and
    an object execution unit which executes the corresponding object in response to an object execution request from the output information determination unit.
  2. 제1항에 있어서, The method of claim 1,
    전원 온(on)의 수행 시, 터치스크린의 배열(격자) 구조를 음성으로 안내하는 배열 구조 안내부를 더 포함하는 시각 장애인을 위한 정보 처리 장치.An information processing apparatus for the visually impaired, further comprising: an array structure guide unit for guiding the array (lattice) structure of the touch screen by voice when performing power on.
  3. The touch screen device of claim 1, wherein the object is at least one of a program name, a function name, a menu name, a list name, a file name, a sentence, an image, a video, and an audio item.
  4. The touch screen device of claim 1, wherein the touch sensing unit obtains position information on the contacted touch position and transmits a touch detection signal containing the position information to the output information determination unit.
  5. The touch screen device of claim 1, wherein, when the touch detection signal for the touch position is maintained beyond a predetermined time, the output information determination unit transmits to the speech synthesis processing unit a voice output request signal causing the object corresponding to the touch position to be output as voice.
  6. The touch screen device of claim 1, wherein, when continuous movement to a neighboring touch position without release of contact is detected, the output information determination unit transmits to the speech synthesis processing unit a voice output request signal causing the object corresponding to each touch position contacted along the movement to be output as voice.
  7. The touch screen device of claim 5 or claim 6, wherein the voice output request signal contains object information corresponding to the touch position concerned.
  8. The touch screen device of claim 1, wherein, when the touch detection signal for the touch position does not exceed the predetermined time, the output information determination unit transmits to the object execution unit an object execution request signal causing the object corresponding to the touch position to be executed.
  9. The touch screen device of claim 1, wherein, when a one-touch is detected again while the contact is moving to a neighboring touch position without being released, the output information determination unit transmits to the object execution unit an object execution request signal causing the object corresponding to the one-touched touch position to be executed.
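Claims 5, 6, 8 and 9 together specify a dwell-and-drag rule: holding past a threshold or sliding across positions triggers voice output, while a brief one-touch triggers execution. A minimal sketch, assuming a 0.5-second threshold and an event format invented for this illustration:

    # Hypothetical sketch of the decision rule in claims 5, 6, 8 and 9.
    # HOLD_THRESHOLD and the argument layout are illustrative assumptions.
    HOLD_THRESHOLD = 0.5  # seconds; the "predetermined time" of the claims

    def on_touch_event(obj, duration, released, moved, speak, execute):
        if moved and not released:
            speak(obj)       # claim 6: sliding onto a position announces it
        elif released and duration <= HOLD_THRESHOLD:
            execute(obj)     # claims 8 and 9: a one-touch executes it
        elif duration > HOLD_THRESHOLD:
            speak(obj)       # claim 5: holding past the threshold announces it

    on_touch_event("Skype", 0.2, released=True, moved=False,
                   speak=lambda o: print("[TTS]", o),
                   execute=lambda o: print("[RUN]", o))  # -> Skype executed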
  10. The touch screen device of claim 1, wherein, when the position information lies on a boundary between different touch positions, the output information determination unit computes the contacted ratio of each touch position and recognizes the object corresponding to the touch position having the higher ratio.
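Claim 10's boundary rule can be pictured as choosing whichever touch position covers the larger share of the contact area. A minimal sketch, with the area fractions supplied as an assumed input:

    # Hypothetical sketch of claim 10's boundary rule: when a finger
    # straddles two touch positions, the one covering the larger share
    # of the contact area wins. The input format is an assumption.
    def resolve_boundary(contact_ratios):
        # contact_ratios maps each candidate touch position to the
        # fraction of the contact area falling inside it.
        return max(contact_ratios, key=contact_ratios.get)

    print(resolve_boundary({"e-mail": 0.3, "Skype": 0.7}))  # -> Skype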
  11. The touch screen device of claim 1, wherein the grid cells of the touch input unit are laid out consistently according to a predetermined rule, and no invalid area exists between adjacent cells.
  12. The touch screen device of claim 1, wherein, when no object is displayed at the touch position contacted by the user, the speech synthesis processing unit causes a voice meaning 'empty' to be output.
  13. A method of implementing a user interface in a touch screen device for the visually impaired, the method comprising the steps of:
    (a) when contact by a user with a specific touch position is detected, determining whether the contact exceeds a predetermined time;
    (b) when the determination in step (a) is that the predetermined time is exceeded, outputting the object corresponding to the contacted touch position as voice; and
    (c) when movement from the specific touch position to a neighboring touch position without release of contact is detected, outputting the object corresponding to each touch position moved to as voice, and, when a one-touch is detected again, executing the object corresponding to the one-touched touch position.
  14. The method of claim 13, wherein, before step (a), when power-on is performed, the grid structure of the touch screen is output as voice.
  15. The method of claim 13, further comprising the step of executing the object corresponding to the contacted touch position when release of contact is detected before the predetermined time in step (a) has been exceeded.
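Taken together, method claims 13 to 15 (with claim 12's 'empty' announcement) amount to a small event loop. The sketch below is a hypothetical rendering only; the event tuple format, the HOLD_THRESHOLD value, and the grid representation are all assumptions:

    # Hypothetical event-loop rendering of method claims 13-15, with
    # claim 12's 'empty' announcement; all names and formats are assumed.
    HOLD_THRESHOLD = 0.5  # the "predetermined time" of step (a)

    def run_interface(events, grid, speak, execute):
        # events: (kind, row, col, duration) tuples, where kind is one of
        # 'hold', 'move', or 'release'.
        for kind, row, col, duration in events:
            obj = grid[row][col]
            if not obj:
                speak("empty")        # claim 12: blank position
            elif kind == "hold" and duration > HOLD_THRESHOLD:
                speak(obj)            # step (b): announce after dwell
            elif kind == "move":
                speak(obj)            # step (c): announce while sliding
            elif kind == "release" and duration <= HOLD_THRESHOLD:
                execute(obj)          # claim 15: a short tap executes

    grid = [["e-mail", "Skype"], ["", "settings"]]
    run_interface([("hold", 0, 0, 0.8), ("move", 0, 1, 0.0),
                   ("release", 0, 1, 0.1)],
                  grid,
                  speak=lambda o: print("[TTS]", o),
                  execute=lambda o: print("[RUN]", o))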
PCT/KR2010/007753 2010-05-18 2010-11-04 Touch screen device and user interface for the visually impaired WO2011145788A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2010-0046599 2010-05-18
KR1020100046599A KR101017108B1 (en) 2010-05-18 2010-05-18 Touch screen apparatus for blind person and method for user interface on the apparatus

Publications (1)

Publication Number Publication Date
WO2011145788A1 (en) 2011-11-24

Family

ID=43777852

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2010/007753 WO2011145788A1 (en) 2010-05-18 2010-11-04 Touch screen device and user interface for the visually impaired

Country Status (2)

Country Link
KR (1) KR101017108B1 (en)
WO (1) WO2011145788A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331190A (en) * 2014-11-27 2015-02-04 天津天地伟业数码科技有限公司 Method and device applied to calibration of touch screens of embedded devices
US10276148B2 (en) 2010-11-04 2019-04-30 Apple Inc. Assisted media presentation

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101329050B1 (en) * 2011-01-14 2013-11-14 (주)시누스 System and method for creating learning contents for children
KR101123212B1 (en) 2011-03-31 2012-03-20 (주) 에스엔아이솔라 Touch screen apparatus for blind person and method for inputting characters in the apparatus
CN102855791B (en) * 2012-09-19 2014-09-10 华南理工大学 Literacy auxiliary device for the blind and operating method of literacy auxiliary device for the blind
KR102229971B1 (en) * 2019-12-04 2021-03-18 건국대학교 글로컬산학협력단 Apparatus and method supporting provision of content information

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100606406B1 (en) * 2005-03-11 2006-07-28 골든키정보통신 주식회사 Computer for a blind person
KR20070113022A (en) * 2006-05-24 2007-11-28 엘지전자 주식회사 Apparatus and operating method of touch screen responds to user input
KR20080080871A (en) * 2007-03-02 2008-09-05 (주)케이티에프테크놀로지스 Device and method of providing user interface

Also Published As

Publication number Publication date
KR101017108B1 (en) 2011-02-25

Similar Documents

Publication Publication Date Title
WO2011145788A1 (en) Touch screen device and user interface for the visually impaired
WO2012064034A1 (en) Touch screen device allowing a blind person to handle objects thereon, and method of handling objects on the touch screen device
WO2014107005A1 (en) Mouse function provision method and terminal implementing the same
WO2013070024A1 (en) Method and apparatus for designating entire area using partial area touch in a portable equipment
WO2013048054A1 (en) Method of operating gesture based communication channel and portable terminal system for supporting the same
WO2012060589A2 (en) Touch control method and portable terminal supporting the same
WO2012033345A1 (en) Motion control touch screen method and apparatus
WO2014051201A1 (en) Portable device and control method thereof
WO2013172607A1 (en) Method of operating a display unit and a terminal supporting the same
WO2015099293A1 (en) Device and method for displaying user interface of virtual input device based on motion recognition
WO2012169730A2 (en) Method and apparatus for providing character input interface
WO2015174597A1 (en) Voice-controllable image display device and voice control method for image display device
WO2011132910A2 (en) Method and apparatus for interface
WO2012134210A2 (en) Touch screen apparatus for visually impaired people, and method for inputting characters to the apparatus
WO2018004140A1 (en) Electronic device and operating method therefor
WO2013125789A1 (en) Electronic apparatus, method for controlling the same, and computer-readable storage medium
WO2013118987A1 (en) Control method and apparatus of electronic device using control device
WO2011055998A2 (en) Method and medium for inputting korean characters for touch screen
WO2014148714A1 (en) Portable device and visual sensation detecting alarm control method thereof
WO2011087206A2 (en) Method for inputting korean character on touch screen
WO2013115493A1 (en) Method and apparatus for managing an application in a mobile electronic device
WO2016093448A1 (en) Mobile device and method for operating mobile device
WO2010110614A2 (en) Apparatus and method for controlling terminal
WO2015163500A1 (en) Electronic device set system including input-assisting device and electronic device for processing input using same
WO2012108724A2 (en) Map display changing device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10851834; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 10851834; Country of ref document: EP; Kind code of ref document: A1)