WO2018112688A1 - Amblyopia assistance method and apparatus - Google Patents
Amblyopia assistance method and apparatus
- Publication number
- WO2018112688A1 (PCT/CN2016/110706)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- visual
- trigger request
- user operation
- amblyopia
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
Definitions
- The present invention relates to the field of vision assistance technology, and in particular to an amblyopia assistance method and apparatus.
- Blindness and amblyopia are among the most serious social and public-health problems in the world. More than 70% of the information humans receive is obtained through vision, so visual impairment largely limits the access of blind and low-vision people to information.
- A number of guide devices have been developed for blind patients, such as guide canes, guide gloves, guide robots, and guide glasses.
- Such guide devices generally measure the distance to obstacles using ultrasound, infrared probes, image sensors, laser ranging, or the like, and then give the user movement instructions by voice or other means.
- Amblyopia-assisting glasses have also been developed in the prior art for low-vision patients. These glasses include a glasses body, an infrared camera, an ordinary camera, and a microprocessor; in use, the images captured by the infrared camera and the ordinary camera are combined, the combined image information is processed by the microprocessor, and the result is finally projected onto the lenses of the glasses body in the form of a line drawing.
- Amblyopia patients retain a certain degree of vision, but the existing amblyopia-assisting glasses are not flexible enough.
- A patient may want to observe a particular area of the field of view, yet the image that the prior-art amblyopia-assisting glasses project onto the lenses of the glasses body may well not be an image of the area the user wants to observe, which in turn leads to a poor user experience.
- The embodiments of the present invention provide an amblyopia assistance method and apparatus, intended to solve the problem that the user's autonomy is not considered during amblyopia assistance, which results in a poor user experience.
- In a first aspect, an amblyopia assistance method is provided, comprising: receiving a first-level trigger request triggered by a user operation, the first-level trigger request including location information of the area the user wants to observe; acquiring, according to the location information, a visual picture of the area the user wants to observe; and processing and displaying the visual picture.
- In a second aspect, an amblyopia assistance apparatus is provided, comprising:
- a receiving unit configured to receive a first-level trigger request triggered by a user operation, the first-level trigger request including location information of the area the user wants to observe;
- a processing unit configured to acquire, according to the location information, a visual picture of the area the user wants to observe and to process the visual picture;
- a display unit configured to display the visual picture processed by the processing unit.
- In a third aspect, an amblyopia assistance apparatus is provided, comprising a memory and a processor, the memory storing computer-executable code that controls the processor to perform the amblyopia assistance method of the first aspect.
- In a fourth aspect, a storage medium is provided for storing the computer software instructions used by the amblyopia assistance apparatus of the second aspect, comprising the program code designed to perform the amblyopia assistance method of the first aspect.
- In a fifth aspect, a computer program product is provided that can be loaded directly into the internal memory of a computer and contains software code which, after being loaded and executed by the computer, implements the amblyopia assistance method of the first aspect.
- The amblyopia assistance method provided by the embodiments of the present invention first receives a first-level trigger request triggered by a user operation, where the request includes location information of the area the user wants to observe; it then acquires a visual picture of that area according to the location information, and finally processes and displays the picture. Because the first-level trigger request carries the location information of the area the user wants to observe, and the visual picture is acquired based on that information, the method can present exactly the area the user selected; that is, the amblyopia assistance process is driven by the user's own choice, which improves the user experience.
- FIG. 1 is a first flowchart of the steps of an amblyopia assistance method according to an embodiment of the present invention;
- FIG. 2 is a first schematic diagram of an area that a user wants to observe according to an embodiment of the present invention;
- FIG. 3 is a second schematic diagram of an area that a user wants to observe according to an embodiment of the present invention;
- FIG. 4 is a third schematic diagram of an area that a user wants to observe according to an embodiment of the present invention;
- FIG. 5 is a second flowchart of the steps of an amblyopia assistance method according to an embodiment of the present invention;
- FIG. 6 is a third flowchart of the steps of an amblyopia assistance method according to an embodiment of the present invention;
- FIG. 7 is a first schematic structural diagram of an amblyopia assistance apparatus according to an embodiment of the present invention;
- FIG. 8 is a second schematic structural diagram of an amblyopia assistance apparatus according to an embodiment of the present invention.
- The principle of the invention is as follows: to address the poor user experience of prior-art low-vision assistance, in the embodiments of the present invention the first-level trigger request carries the location information of the area the user wants to observe; the visual picture of that area is acquired accordingly and displayed after processing. Because the visual picture is acquired based on the user's own selection during the amblyopia assistance process, the poor user experience of the prior art is resolved.
- The amblyopia assistance method provided by the embodiments of the present invention may be executed by an amblyopia assistance apparatus or by a terminal device.
- The amblyopia assistance apparatus may be a central processing unit (CPU) in the terminal device, a combination of a CPU and hardware such as a memory, or another unit or module in the terminal device.
- The terminal device may specifically be a smartphone, augmented-reality glasses (AR glasses), a portable computer, a pocket computer, a handheld computer, a digital photo frame, a palmtop computer, a navigator, or the like.
- An embodiment of the present invention provides an amblyopia assistance method comprising the following steps.
- First, a first-level trigger request triggered by a user operation is received (step S11).
- The first-level trigger request includes location information of the area the user wants to observe.
- The user operation in the foregoing embodiment includes at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
- Taking a gesture as the user operation, one possible implementation of receiving the first-level trigger request in step S11 is as follows: (a) detect the user's gesture in real time; (b) determine whether the gesture is a preset gesture; (c) when it is, trigger the first-level trigger request.
- For example, the preset gesture may be a finger pointing at a certain area.
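As an illustration of the flow just described (detect the gesture in real time, compare it with the preset gesture, then trigger the request), the following is a minimal Python sketch. The `poll_gesture` callable and the request class are assumptions for illustration, not part of the patent; any real gesture recognizer could be plugged in.

```python
import time
from dataclasses import dataclass

@dataclass
class FirstLevelTriggerRequest:
    # Location information of the area the user wants to observe:
    # a region label, a coordinate point, or a central direction (see FIGS. 2-4).
    location_info: object

PRESET_GESTURE = "point"  # e.g. a finger pointing at an area

def watch_for_first_level_trigger(poll_gesture, handler, period_s=0.05):
    """poll_gesture: () -> (gesture_name, pointed_location) or None, both hypothetical."""
    while True:
        sample = poll_gesture()                # a: detect the user's gesture in real time
        if sample is not None:
            gesture, location = sample
            if gesture == PRESET_GESTURE:      # b: compare against the preset gesture
                # c: the gesture matches, so trigger the first-level request
                handler(FirstLevelTriggerRequest(location_info=location))
        time.sleep(period_s)
```

Polling is used here only for simplicity; an event-driven recognizer would serve equally well.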
- Optionally, a certain range 20 centered on the user's location may be divided into n areas (the figure takes 12 areas, labeled A, B, C, and so on, as an example).
- The location information of the area the user wants to observe carried in the first-level trigger request may then simply be the label of any one of those areas. For example, when the location information in the request is A, the area the user wants to observe is area A; when it is C, the area the user wants to observe is area C.
- Alternatively, the location information in the first-level trigger request may be a coordinate point (31, 32), and the area the user wants to observe is then the region (M, N) centered on that coordinate point.
- The area the user wants to observe is illustrated as a circle and a rectangle, respectively, but the embodiment of the present invention is not limited thereto: the area may also be another shape centered on the coordinate point in the first-level trigger request, for example a pentagon, a triangle, or an irregular shape centered on that point. That is, this is one possible form of the location information in the first-level trigger request; the shape and size of the area the user wants to observe are not limited in the embodiment of the present invention.
- The location information in the first-level trigger request may also be a central direction (41, 42), and the area the user wants to observe is then the region (O, P) formed by offsetting a certain angle to both sides of that central direction.
- Again, this is one possible form of the location information in the first-level trigger request; the embodiment of the present invention does not limit the angle by which the central direction is offset to the two sides.
- For example, the area the user wants to observe may be the region formed by offsetting the central direction in the first-level trigger request by 30°, 60°, or the like to each side.
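The three forms of location information described above (an area label, a coordinate point with a surrounding region, and a central direction with an offset angle) can be resolved into concrete regions along the following lines. This is a sketch under assumed geometry conventions; the 12-sector split and the 30° default mirror the examples in the text.

```python
def region_from_label(label, n=12, radius=10.0):
    """Area-label form (FIG. 2): the range around the user is split into n sectors."""
    index = ord(label.upper()) - ord("A")
    if not 0 <= index < n:
        raise ValueError(f"label {label!r} is outside the {n} configured areas")
    span = 360.0 / n
    return {"kind": "sector", "start_deg": index * span,
            "end_deg": (index + 1) * span, "radius": radius}

def region_from_point(x, y, half_width=1.0, half_height=1.0):
    """Coordinate-point form (FIG. 3): a rectangle centered on the point; any other
    shape (circle, pentagon, triangle, irregular) could be substituted."""
    return {"kind": "rect", "center": (x, y),
            "bounds": (x - half_width, y - half_height, x + half_width, y + half_height)}

def region_from_direction(direction_deg, offset_deg=30.0):
    """Central-direction form (FIG. 4): offset a given angle to both sides."""
    return {"kind": "wedge", "start_deg": direction_deg - offset_deg,
            "end_deg": direction_deg + offset_deg}
```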
- A visual picture of the area the user wants to observe may be acquired by one or more of ultrasonic detection, infrared detection, image sensors, laser ranging, and the like, working together.
- Any other prior-art manner of acquiring a visual picture may also be used; that is, the embodiment of the present invention does not limit the manner of acquiring the visual picture of the area the user wants to observe.
- Processing the visual picture may include at least one of, or a combination of: magnifying the picture, adjusting its contrast, adjusting its brightness, converting the adjusted picture into a line drawing, and color-converting objects in the picture.
- How the visual picture is processed is not limited, provided the processing makes the picture clearer for the user to observe.
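A minimal sketch of the processing operations listed above, using OpenCV to magnify the picture, adjust contrast and brightness, and optionally convert the result into a line drawing via edge detection; all parameter values are illustrative assumptions.

```python
import cv2

def process_visual_picture(img, zoom=2.0, contrast=1.3, brightness=20,
                           as_line_drawing=False):
    # Magnify the visual picture.
    h, w = img.shape[:2]
    out = cv2.resize(img, (int(w * zoom), int(h * zoom)),
                     interpolation=cv2.INTER_LINEAR)
    # Adjust contrast (gain) and brightness (bias): out = contrast * pixel + brightness.
    out = cv2.convertScaleAbs(out, alpha=contrast, beta=brightness)
    if as_line_drawing:
        # Convert the adjusted picture into a line drawing via edge detection.
        gray = cv2.cvtColor(out, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        out = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    return out
```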
- Displaying the visual picture means displaying the processed picture, and the specific display manner may be chosen according to the execution subject of the amblyopia assistance method provided by the embodiment of the present invention.
- For example, when the execution subject is a mobile phone, the visual picture may be displayed on the phone screen.
- When the execution subject is AR glasses, the visual picture may be projected onto the lenses of the AR glasses.
- Optionally, the amblyopia assistance method further includes: receiving an adjustment instruction triggered by a user operation, and adjusting the magnification ratio of the visual picture according to the adjustment instruction.
- The user operation that triggers the adjustment instruction includes at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
- Adjusting the magnification in this way makes the picture better suited to the user's viewing, helping the user see it more clearly.
- The amblyopia assistance method provided by the embodiment of the present invention first receives a first-level trigger request triggered by a user operation, where the request includes location information of the area the user wants to observe; it then acquires a visual picture of that area according to the location information, and finally processes and displays the picture. Because the first-level trigger request carries the location information of the area the user wants to observe, and the visual picture is acquired based on that information, the method can present exactly the area the user selected; that is, the amblyopia assistance process is driven by the user's own choice, which improves the user experience.
- The embodiment of the present invention further provides an amblyopia assistance method in which, after the visual picture is processed and displayed in step S13, the method further includes: receiving a second-level trigger request triggered by a user operation.
- That is, if the user still cannot see the displayed picture clearly, the second-level trigger request can be triggered by a certain operation.
- The user operation that triggers the second-level trigger request includes at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
- The user operation that triggers the second-level trigger request may be the same as, or different from, the user operation that triggers the first-level trigger request.
- For example, both requests may be triggered by the same gesture; or the first-level request may be triggered by a specific voice input and the second-level request by a key press.
- Intelligently recognizing the visual picture may include recognizing obstacles, traffic lights, text, faces, scenes, and the like in the picture.
- Outputting a sound prompt signal and/or a tactile prompt signal according to the intelligent recognition result has three specific implementations: outputting a sound prompt signal only, outputting a tactile prompt signal only, or outputting both.
- For example, the sound prompt signal may be speech, a buzz, or the like.
- The tactile prompt signal may be vibration, pressure, or the like.
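The three implementations above reduce to a dispatch on the recognition result with two optional output channels. A sketch, assuming hypothetical `play_speech` and `vibrate` device hooks (stubbed here with prints):

```python
def play_speech(text):          # hypothetical sound hook: speech, a buzz, ...
    print(f"[sound] {text}")

def vibrate(pattern_ms):        # hypothetical tactile hook: vibration, pressure, ...
    print(f"[haptic] {pattern_ms}")

def output_prompt(result, use_sound=True, use_haptic=True):
    """Implements the three variants: sound only, tactile only, or both."""
    if result.get("obstacle"):
        if use_sound:
            play_speech("Obstacle ahead")
        if use_haptic:
            vibrate([200, 100, 200])    # urgent double pulse
    elif result.get("traffic_light") == "red":
        if use_sound:
            play_speech("Red light, please wait")
        if use_haptic:
            vibrate([500])
```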
- After the visual picture is processed and displayed, the above embodiment further receives a second-level trigger request triggered by a user operation, intelligently recognizes the picture according to that request, and finally outputs a sound prompt signal and/or a tactile prompt signal according to the recognition result.
- Thus, when the user still cannot see the visual picture of the area to be observed clearly even after it is processed and displayed, the sound and/or tactile prompt signals remind the user, further helping the user understand the surrounding environment and improving the user experience.
- The embodiment shown in FIG. 5 can intelligently recognize the visual picture according to the second-level trigger request and then output sound and/or tactile prompt signals according to the recognition result to further help the user understand the environment. However, when the user's environment is complex, the recognition result may be inaccurate, and the prompt signals may fail to alert the user correctly.
- To address this, the embodiment of the present invention further provides an amblyopia assistance method in which, referring to FIG. 6, after the sound and/or tactile prompt signal is output according to the recognition result in step S16, the method further includes: receiving a three-level trigger request triggered by a user operation.
- The user operation that triggers the three-level trigger request includes at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
- It may be the same as the user operation that triggers the first-level or the second-level trigger request, or different from both.
- For example, all three requests may be triggered by the same gesture; or the first-level request may be triggered by a voice input, the second-level request by a key press, and the three-level request by a head-turning motion.
- The remote service device may be a staffed service platform established specifically to provide services to users, or a communication device in the user's home.
- When the remote service device is a staffed service platform, the platform's customer-service staff can watch the received visual picture and communicate with the user by voice to remind the user.
- When the remote service device is a communication device in the user's home, the user's family can watch the received visual picture and communicate with the user by voice to remind the user.
- The remote service device may also be another type of device; its specific form is not limited in the embodiment of the present invention.
- After outputting the sound and/or tactile prompt signal according to the recognition result, the above embodiment further receives a three-level trigger request triggered by a user operation, sends the visual picture to the remote service device and establishes a voice connection with it according to that request, and finally receives the voice prompt signal sent by the remote service device.
- Thus, when the user still cannot understand the environment even after the sound and/or tactile prompts, a connection to the remote service device provides the user with a remote service; the above embodiment therefore further helps the user understand the environment and improves the user experience.
- FIG. 7 shows a possible structural diagram of the amblyopia assistance apparatus involved in the above embodiments.
- The amblyopia assistance apparatus 700 includes:
- a receiving unit 71 configured to receive a first-level trigger request triggered by a user operation;
- the first-level trigger request includes location information of the area the user wants to observe;
- a processing unit 72 configured to acquire, according to the location information, a visual picture of the area the user wants to observe and to process the visual picture;
- a display unit 73 configured to display the visual picture processed by the processing unit.
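The unit division of apparatus 700 can be pictured in code as three cooperating components; this is a structural sketch only, since the patent leaves each unit's internals open:

```python
class ReceivingUnit:
    """Unit 71: receives trigger requests and adjustment instructions."""
    def receive(self):
        raise NotImplementedError  # e.g. infrared sensing, acceleration/gravity sensor

class ProcessingUnit:
    """Unit 72: acquires the visual picture of the requested area and processes it."""
    def acquire_and_process(self, location_info):
        raise NotImplementedError  # e.g. camera capture, then zoom/contrast adjustment

class DisplayUnit:
    """Unit 73: displays the processed picture (phone screen, AR lens projection...)."""
    def show(self, picture):
        raise NotImplementedError

class AmblyopiaAssistanceApparatus:
    """Wires the three units of apparatus 700 together."""
    def __init__(self, receiving, processing, display):
        self.receiving_unit = receiving
        self.processing_unit = processing
        self.display_unit = display

    def handle_first_level_request(self, request):
        picture = self.processing_unit.acquire_and_process(request.location_info)
        self.display_unit.show(picture)
```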
- In the amblyopia assistance apparatus provided by the embodiment of the present invention, the receiving unit receives a first-level trigger request triggered by a user operation, where the request includes location information of the area the user wants to observe; the processing unit acquires, according to the location information, a visual picture of that area and processes it; and the display unit displays the processed picture.
- Because the first-level trigger request carries the location information of the user-selected area and the visual picture is acquired from that information, the apparatus presents the area the user wants to observe; that is, the amblyopia assistance process is driven by the user's own selection, which improves the user experience.
- Optionally, the amblyopia assistance apparatus further includes an intelligent recognition unit and a prompt-signal output unit;
- the receiving unit is further configured to receive a second-level trigger request triggered by a user operation;
- the intelligent recognition unit is configured to intelligently recognize the visual picture;
- the prompt-signal output unit is configured to output a sound prompt signal and/or a tactile prompt signal according to the intelligent recognition result.
- Optionally, the amblyopia assistance apparatus further includes a communication unit;
- the receiving unit is further configured to receive a three-level trigger request triggered by a user operation;
- the communication unit is configured to send the visual picture to the remote service device and establish a voice connection with it;
- the communication unit is further configured to receive the voice prompt signal sent by the remote service device.
- Optionally, the receiving unit is further configured to receive an adjustment instruction triggered by a user operation;
- the processing unit is further configured to adjust the magnification ratio of the visual picture according to the adjustment instruction.
- The user operation includes at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
- It can be understood that, in order to implement the above functions, each amblyopia assistance device, such as a mobile terminal or VR glasses, includes hardware structures and/or software modules corresponding to each function.
- Those skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
- The embodiment of the present invention may divide the mobile terminal, VR glasses, or the like into function modules according to the foregoing method examples; for example, each function module may correspond to one function, or two or more functions may be integrated into one processing module.
- The integrated module may be implemented in the form of hardware or in the form of a software function module. It should be noted that the division of modules in the embodiment of the present invention is schematic and is merely a logical function division; other division manners are possible in actual implementation.
- In the case of dividing function modules according to functions, FIG. 7 shows a possible structural diagram of the amblyopia assistance apparatus involved in the above embodiments.
- The apparatus includes a receiving unit, a processing unit, and a display unit.
- The receiving unit supports the apparatus in performing step S11 in FIGS. 1, 5, and 6;
- the processing unit supports the apparatus in performing step S12 and the processing of the visual picture in step S13 in FIGS. 1, 5, and 6;
- the display unit supports the apparatus in performing the displaying of the processed visual picture in step S13 in FIGS. 1, 5, and 6. All related content of the steps involved in the foregoing method embodiments may be referred to the function descriptions of the corresponding function modules and is not repeated here.
- The receiving unit in the above embodiment may be an infrared sensing device, an acceleration sensor, a gravity sensor, or the like.
- The processing module 72 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof; it can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
- The processor may also be a combination that realizes computing functions, for example a combination of one or more microprocessors, or of a DSP and a microprocessor.
- The display module 73 may be a liquid-crystal display, an organic electroluminescent display, a laser projection display device, or the like.
- Referring to FIG. 8, the amblyopia assistance apparatus 800 includes a processor 81, a memory 82, a display 83, and a bus 84.
- The processor 81, the memory 82, and the display 83 are connected to one another through the bus 84.
- The bus 84 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
- The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 8, but this does not mean that there is only one bus or one type of bus.
- The steps of the method or algorithm described in connection with this disclosure may be implemented in hardware or by a processor executing software instructions.
- The embodiment of the present invention further provides a storage medium, which may include the memory 82, for storing the computer software instructions used by the amblyopia assistance apparatus, including the program code designed to execute the amblyopia assistance method.
- The software instructions may consist of corresponding software modules, and the software modules may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium.
- The storage medium may also be an integral part of the processor.
- The processor and the storage medium may be located in an ASIC; additionally, the ASIC may be located in a core network interface device.
- The processor and the storage medium may also exist as discrete components in the core network interface device.
- The embodiment of the invention further provides a computer program product that can be loaded directly into the memory 82 and contains software code; after being loaded and executed by a computer, the computer program implements the above amblyopia assistance method.
- In the one or more examples above, the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
- When implemented in software, the functions may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium.
- Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates the transfer of a computer program from one place to another.
- A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An amblyopia assistance method and apparatus, relating to the field of vision assistance technology and intended to solve the problem of a poor user experience during amblyopia assistance. The method comprises: receiving a first-level trigger request triggered by a user operation (S11), the first-level trigger request including location information of the area the user wants to observe; acquiring, according to the location information, a visual picture of the area the user wants to observe (S12); and processing and displaying the visual picture (S13).
Description
The present invention relates to the field of vision assistance technology, and in particular to an amblyopia assistance method and apparatus.
Blindness and amblyopia are among the most serious social and public-health problems in the world. More than 70% of the information humans receive is obtained through vision, so visual impairment largely limits the access of blind and low-vision people to information.
At present, many guide devices have been developed for fully blind patients, for example guide canes, guide gloves, guide robots, and guide glasses. Such guide devices generally measure the distance to obstacles using ultrasound, infrared probes, image sensors, laser ranging, and the like, and then give the user movement instructions by voice or other means. In addition, for low-vision patients, amblyopia-assisting glasses have also been developed in the prior art. These glasses comprise a glasses body, an infrared camera, an ordinary camera, and a microprocessor; in use, the images captured by the infrared camera and the ordinary camera are combined, the combined image information is processed by the microprocessor, and the result is finally projected onto the lenses of the glasses body in the form of a line drawing.
Amblyopia patients retain a certain degree of vision, but existing amblyopia-assisting glasses are not flexible enough: a patient may want to observe a particular area of the field of view, yet the image that prior-art amblyopia-assisting glasses project onto the lenses of the glasses body may well not be an image of the area the user wants to observe, which leads to a poor user experience.
Summary of the Invention
The embodiments of the present invention provide an amblyopia assistance method and apparatus, intended to solve the problem that the user's autonomy is not considered during amblyopia assistance, which results in a poor user experience.
To achieve the above object, the embodiments of the present invention adopt the following technical solutions:
In a first aspect, an amblyopia assistance method is provided, comprising:
receiving a first-level trigger request triggered by a user operation, the first-level trigger request including location information of the area the user wants to observe;
acquiring, according to the location information, a visual picture of the area the user wants to observe; and
processing and displaying the visual picture.
In a second aspect, an amblyopia assistance apparatus is provided, comprising:
a receiving unit configured to receive a first-level trigger request triggered by a user operation, the first-level trigger request including location information of the area the user wants to observe;
a processing unit configured to acquire, according to the location information, a visual picture of the area the user wants to observe and to process the visual picture; and
a display unit configured to display the visual picture processed by the processing unit.
In a third aspect, an amblyopia assistance apparatus is provided, comprising a memory and a processor, the memory being configured to store computer-executable code, and the computer-executable code being configured to control the processor to perform the amblyopia assistance method of the first aspect.
In a fourth aspect, a storage medium is provided for storing the computer software instructions used by the amblyopia assistance apparatus of the second aspect, the instructions comprising the program code designed to perform the amblyopia assistance method of the first aspect.
In a fifth aspect, a computer program product is provided that can be loaded directly into the internal memory of a computer and contains software code; after being loaded and executed by the computer, the computer program implements the amblyopia assistance method of the first aspect.
The amblyopia assistance method provided by the embodiments of the present invention first receives a first-level trigger request triggered by a user operation, where the request includes location information of the area the user wants to observe; it then acquires, according to the location information, a visual picture of that area, and finally processes and displays the picture. Because the first-level trigger request carries the location information of the area the user wants to observe and the visual picture is acquired based on that information, the method can present exactly the area the user selected; that is, the amblyopia assistance process is driven by the user's own choice, which improves the user experience.
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a first flowchart of the steps of an amblyopia assistance method according to an embodiment of the present invention;
FIG. 2 is a first schematic diagram of an area that a user wants to observe according to an embodiment of the present invention;
FIG. 3 is a second schematic diagram of an area that a user wants to observe according to an embodiment of the present invention;
FIG. 4 is a third schematic diagram of an area that a user wants to observe according to an embodiment of the present invention;
FIG. 5 is a second flowchart of the steps of an amblyopia assistance method according to an embodiment of the present invention;
FIG. 6 is a third flowchart of the steps of an amblyopia assistance method according to an embodiment of the present invention;
FIG. 7 is a first schematic structural diagram of an amblyopia assistance apparatus according to an embodiment of the present invention;
FIG. 8 is a second schematic structural diagram of an amblyopia assistance apparatus according to an embodiment of the present invention.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it. Unless otherwise stated, "a plurality of" herein means two or more.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as more preferred or advantageous than other embodiments or designs; rather, the use of such words is intended to present the related concepts in a concrete manner.
It should be noted that, in the embodiments of the present invention, unless otherwise stated, "a plurality of" means two or more.
It should be noted that, in the embodiments of the present invention, "of", "corresponding (relevant)", and "corresponding" may sometimes be used interchangeably; when their difference is not emphasized, the meanings they express are the same.
The technical solutions provided by the embodiments of the present invention are described below with reference to the accompanying drawings. Obviously, the described embodiments are merely some, rather than all, of the embodiments of the present invention. It should be noted that, provided they do not conflict, some or all of the technical features in any of the technical solutions provided below may be combined to form new technical solutions.
The inventive principle of the present invention is as follows: to address the poor user experience of prior-art amblyopia assistance, in the embodiments of the present invention the first-level trigger request carries the location information of the area the user wants to observe; the visual picture of that area is acquired accordingly and displayed after processing. Because the visual picture is acquired based on the user's own selection during the amblyopia assistance process, the poor user experience of the prior art is resolved.
The amblyopia assistance method provided by the embodiments of the present invention may be executed by an amblyopia assistance apparatus or by a terminal device. The amblyopia assistance apparatus may be a central processing unit (CPU) in the terminal device, a combination of a CPU and hardware such as a memory, or another unit or module in the terminal device. The terminal device may specifically be a smartphone, augmented-reality glasses (AR glasses), a portable computer, a pocket computer, a handheld computer, a digital photo frame, a palmtop computer, a navigator, or the like.
Based on the above, an embodiment of the present invention provides an amblyopia assistance method. Referring to FIG. 1, the method includes the following steps:
S11. Receive a first-level trigger request triggered by a user operation.
The first-level trigger request includes location information of the area the user wants to observe.
Optionally, the user operation in the above embodiment includes at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
Taking a gesture as the user operation, the following provides one possible implementation of receiving the first-level trigger request triggered by a user operation in step S11. The method includes:
a. Detect the user's gesture in real time.
b. Determine whether the user's gesture is a preset gesture.
c. When the user's gesture is the preset gesture, trigger the first-level trigger request.
For example, the preset gesture may be a finger pointing at a certain area.
Optionally, referring to FIG. 2, in the above embodiment a certain range 20 centered on the user's location may be divided into n areas (the figure takes 12 areas, A, B, C, and so on, as an example), and the location information of the area the user wants to observe carried in the first-level trigger request may specifically be the label of any one of those areas. For example, when the location information included in the first-level trigger request is A, the area the user wants to observe is area A; when it is C, the area the user wants to observe is area C.
Optionally, referring to FIG. 3, the location information in the first-level trigger request may also be a coordinate point (31, 32), and the area the user wants to observe is then the region (M, N) centered on that coordinate point. It should be noted that FIG. 3 illustrates the area the user wants to observe as a circle and a rectangle, respectively, but the embodiment of the present invention is not limited thereto: the area may also be another shape centered on the coordinate point in the first-level trigger request, for example a pentagon, a triangle, or an irregular shape centered on that point. That is, this is one possible form of the location information in the first-level trigger request; the embodiment of the present invention does not limit the shape or size of the area the user wants to observe.
Optionally, referring to FIG. 4, the location information in the first-level trigger request may also be a central direction (41, 42), and the area the user wants to observe is then the region (O, P) formed by offsetting a certain angle to both sides of that central direction. Again, this is one possible form of the location information; the embodiment of the present invention does not limit the offset angle, which may for example be 30° or 60° to each side of the central direction.
S12. Acquire, according to the location information, a visual picture of the area the user wants to observe.
Specifically, in the embodiment of the present invention the visual picture of the area the user wants to observe may be acquired by one or more of ultrasonic detection, infrared detection, image sensors, laser ranging, and the like, working together. Any other prior-art way of acquiring a visual picture may also be used; that is, the embodiment of the present invention does not limit the manner of acquiring the visual picture of the area the user wants to observe.
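For the common image-sensor case, acquisition could look like the sketch below, which grabs one camera frame with OpenCV and crops it to the requested region; the pixel-rectangle region format is an assumption tied to the coordinate-point form of FIG. 3.

```python
import cv2

def acquire_visual_picture(region_bounds, camera_index=0):
    """Captures one frame and crops it to the area the user wants to observe.

    region_bounds: (x0, y0, x1, y1) in pixel coordinates, an assumed format.
    """
    cap = cv2.VideoCapture(camera_index)
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("camera frame could not be read")
        x0, y0, x1, y1 = region_bounds
        h, w = frame.shape[:2]
        # Clamp the region to the frame so a partly off-screen selection still works.
        x0, y0 = max(0, int(x0)), max(0, int(y0))
        x1, y1 = min(w, int(x1)), min(h, int(y1))
        return frame[y0:y1, x0:x1]
    finally:
        cap.release()
```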
S13. Process and display the visual picture.
Specifically, processing the visual picture may include at least one of, or a combination of: magnifying the picture, adjusting its contrast, adjusting its brightness, converting the adjusted picture into a line drawing, and color-converting objects in the picture. The embodiment of the present invention does not limit how the visual picture is processed, provided the processing makes the picture clearer for the user to observe.
Further, displaying the visual picture above means displaying the processed visual picture, and the specific display manner may be chosen according to the execution subject of the amblyopia assistance method provided by the embodiment of the present invention. For example, when the execution subject is a mobile phone, the visual picture may be displayed on the phone screen; when the execution subject is AR glasses, the visual picture may be projected onto the lenses of the AR glasses.
Optionally, after the visual picture is processed and displayed in step S13, the amblyopia assistance method further includes:
receiving an adjustment instruction triggered by a user operation; and
adjusting the magnification ratio of the visual picture according to the adjustment instruction.
The user operation that triggers the adjustment instruction includes at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
Adjusting the magnification ratio of the visual picture through a user-triggered adjustment instruction makes the magnification better suited to the user's viewing, helping the user see the picture more clearly.
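The magnification adjustment described above can be kept as simple as a bounded stepper driven by adjustment instructions; the step size and limits below are illustrative assumptions.

```python
class ZoomController:
    """Adjusts the magnification ratio of the visual picture from user instructions."""
    def __init__(self, zoom=1.0, step=0.25, minimum=1.0, maximum=8.0):
        self.zoom, self.step = zoom, step
        self.minimum, self.maximum = minimum, maximum

    def apply(self, instruction):
        # instruction is "in" or "out", however the user operation was expressed
        if instruction == "in":
            self.zoom = min(self.maximum, self.zoom + self.step)
        elif instruction == "out":
            self.zoom = max(self.minimum, self.zoom - self.step)
        return self.zoom
```

Whatever operation triggers the instruction (gesture, head motion, voice, or key press), it only needs to map to "in" or "out" here.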
The amblyopia assistance method provided by the embodiment of the present invention first receives a first-level trigger request triggered by a user operation, where the request includes location information of the area the user wants to observe; it then acquires, according to the location information, a visual picture of that area, and finally processes and displays the picture. Because the first-level trigger request carries the location information of the area the user wants to observe and the visual picture is acquired based on that information, the method can present exactly the area the user selected; that is, the amblyopia assistance process is driven by the user's own choice, which improves the user experience.
The above embodiment can display the visual picture of the area the user wants to observe, so that the user can understand the surrounding environment by viewing it. However, because the complexity of that picture may vary and different users' vision may also differ, the user may still be unable to see the displayed picture clearly. To address this, an embodiment of the present invention further provides an amblyopia assistance method: referring to FIG. 5, after the visual picture is processed and displayed in step S13, the method provided by the above embodiment further includes:
S14. Receive a second-level trigger request triggered by a user operation.
That is, after the visual picture is processed and displayed, if the user still cannot see it clearly, the second-level trigger request can be triggered by a certain operation.
The user operation that triggers the second-level trigger request includes at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
It should be noted that the user operation that triggers the second-level trigger request may be the same as, or different from, the user operation that triggers the first-level trigger request. For example, both may be the same gesture; or the first-level request may be triggered by a specific voice input and the second-level request by a key press.
S15. Intelligently recognize the visual picture according to the second-level trigger request.
For example, intelligent recognition of the visual picture may include recognizing obstacles, traffic lights, text, faces, scenes, and the like in the picture.
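Step S15 leaves the recognition method open; one plausible arrangement is to run a set of detectors over the frame and collect whatever they find. The detector callables here are assumptions, not part of the patent:

```python
def recognize_visual_picture(frame, detectors):
    """Runs each detector over the frame and collects non-empty results.

    detectors: mapping from category name ("obstacle", "traffic_light", "text",
    "face", "scene", ...) to a callable frame -> result or None.
    """
    results = {}
    for category, detect in detectors.items():
        found = detect(frame)
        if found is not None:
            results[category] = found
    return results

# Example wiring with hypothetical detector callables:
# results = recognize_visual_picture(frame, {
#     "obstacle": detect_obstacles,          # e.g. a depth- or model-based detector
#     "traffic_light": detect_traffic_light,
#     "text": read_text,                     # e.g. an OCR engine
# })
```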
S16. Output a sound prompt signal and/or a tactile prompt signal according to the intelligent recognition result.
Outputting prompt signals according to the intelligent recognition result has the following three specific implementations:
1. Output a sound prompt signal according to the intelligent recognition result.
2. Output a tactile prompt signal according to the intelligent recognition result.
3. Output both a sound prompt signal and a tactile prompt signal according to the intelligent recognition result.
For example, the sound prompt signal may specifically be speech, a buzz, or the like, and the tactile prompt signal may be vibration, pressure, or the like.
After the visual picture is processed and displayed, the above embodiment further receives a second-level trigger request triggered by a user operation, intelligently recognizes the picture according to that request, and finally outputs a sound prompt signal and/or a tactile prompt signal according to the recognition result. Thus, when the user still cannot see the displayed picture of the area to be observed clearly, the prompt signals remind the user, further helping the user understand the surrounding environment and improving the user experience.
Further, the embodiment shown in FIG. 5 can intelligently recognize the visual picture according to the second-level trigger request and then output sound and/or tactile prompt signals according to the recognition result to further help the user understand the environment. However, when the user's environment is complex, the recognition result may be inaccurate and the prompt signals may fail to alert the user correctly. To address this, an embodiment of the present invention further provides an amblyopia assistance method: referring to FIG. 6, after the sound and/or tactile prompt signal is output according to the intelligent recognition result in step S16, the method provided by the above embodiment further includes:
S17. Receive a three-level trigger request triggered by a user operation.
Likewise, the user operation that triggers the three-level trigger request includes at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
It should be noted that the user operation that triggers the three-level trigger request may be the same as the operation that triggers the first-level or second-level trigger request, or different from both. For example, all three requests may be triggered by the same gesture; or the first-level request may be triggered by a voice input, the second-level request by a key press, and the three-level request by a head-turning motion.
S18. Send the visual picture to a remote service device according to the three-level trigger request and establish a voice connection with the remote service device.
For example, the remote service device may be a staffed service platform established specifically to provide services to users, or a communication device in the user's home. When the remote service device is a staffed service platform, the platform's customer-service staff can watch the received visual picture and communicate with the user by voice to remind the user. When the remote service device is a communication device in the user's home, the user's family can watch the received visual picture and communicate with the user by voice to remind the user. Of course, the remote service device may also be another type of device; its specific form is not limited in the embodiment of the present invention.
S19. Receive a voice prompt signal sent by the remote service device.
After outputting the sound and/or tactile prompt signal according to the intelligent recognition result, the above embodiment further receives a three-level trigger request triggered by a user operation, sends the visual picture to the remote service device and establishes a voice connection with it according to that request, and finally receives the voice prompt signal sent back by the remote service device. Thus, when the user still cannot understand the environment after the automatic prompts, a connection to the remote service device provides the user with a remote service; the above embodiment therefore further helps the user understand the environment and improves the user experience.
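Steps S18 and S19 amount to shipping the picture to a remote endpoint and receiving voice data back. A minimal transport sketch over a plain TCP socket follows; the host name, the length-prefixed framing, and the JPEG encoding are all assumptions, and a production system would more likely use an established streaming protocol:

```python
import socket
import struct

import cv2

def send_visual_picture(frame, host="remote.service.example", port=9000):
    """Sends one JPEG-encoded frame to the remote service device, length-prefixed."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    payload = jpeg.tobytes()
    with socket.create_connection((host, port)) as conn:
        conn.sendall(struct.pack("!I", len(payload)))  # 4-byte big-endian length
        conn.sendall(payload)

def receive_voice_prompt(conn, chunk=4096):
    """Yields voice-prompt audio chunks sent back by the remote service device."""
    while True:
        data = conn.recv(chunk)
        if not data:
            break
        yield data  # hand each chunk to an audio playback device
```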
The apparatus embodiments corresponding to the method embodiments provided above are described below. It should be noted that the explanations of related content in the following apparatus embodiments may refer to the method embodiments above.
In the case of dividing function modules according to functions, FIG. 7 shows a possible structural diagram of the amblyopia assistance apparatus involved in the above embodiments. The amblyopia assistance apparatus 700 includes:
a receiving unit 71 configured to receive a first-level trigger request triggered by a user operation,
where the first-level trigger request includes location information of the area the user wants to observe;
a processing unit 72 configured to acquire, according to the location information, a visual picture of the area the user wants to observe and to process the visual picture; and
a display unit 73 configured to display the visual picture processed by the processing unit.
In the amblyopia assistance apparatus provided by the embodiment of the present invention, the receiving unit receives a first-level trigger request triggered by a user operation, where the request includes location information of the area the user wants to observe; the processing unit acquires, according to the location information, a visual picture of that area and processes it; and the display unit displays the processed picture. Because the first-level trigger request carries the location information of the user-selected area and the visual picture is acquired from that information, the apparatus can present the area the user wants to observe; that is, the amblyopia assistance process is driven by the user's own selection, which improves the user experience.
Optionally, the amblyopia assistance apparatus further includes an intelligent recognition unit and a prompt-signal output unit:
the receiving unit is further configured to receive a second-level trigger request triggered by a user operation;
the intelligent recognition unit is configured to intelligently recognize the visual picture; and
the prompt-signal output unit is configured to output a sound prompt signal and/or a tactile prompt signal according to the intelligent recognition result.
Optionally, the amblyopia assistance apparatus further includes a communication unit:
the receiving unit is further configured to receive a three-level trigger request triggered by a user operation;
the communication unit is configured to send the visual picture to the remote service device and establish a voice connection with it; and
the communication unit is further configured to receive the voice prompt signal sent by the remote service device.
Optionally, the receiving unit is further configured to receive an adjustment instruction triggered by a user operation;
the processing unit is further configured to adjust the magnification ratio of the visual picture according to the adjustment instruction.
Optionally, the user operation includes at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
The above mainly describes the solutions provided by the embodiments of the present invention from the perspective of interaction between the function units. It can be understood that, to implement the above functions, each amblyopia assistance device, such as a mobile terminal or VR glasses, includes hardware structures and/or software modules corresponding to each function. Those skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The embodiments of the present invention may divide the mobile terminal, VR glasses, or the like into function modules according to the above method examples; for example, each function module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software function module. It should be noted that the division of modules in the embodiments of the present invention is schematic and is merely a logical function division; other divisions are possible in actual implementation.
In the case of dividing function modules according to functions, FIG. 7 shows a possible structural diagram of the amblyopia assistance apparatus involved in the above embodiments; the apparatus includes a receiving unit, a processing unit, and a display unit. The receiving unit supports the apparatus in performing step S11 in FIGS. 1, 5, and 6; the processing unit supports the apparatus in performing step S12 and the processing of the visual picture in step S13; and the display unit supports the apparatus in performing the displaying of the processed picture in step S13. All related content of the steps in the above method embodiments may be referred to the function descriptions of the corresponding function modules and is not repeated here.
In addition, the receiving unit in the above embodiment may be an infrared sensing device, an acceleration sensor, a gravity sensor, or the like. The processing module 72 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof; it can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the present invention. The processor may also be a combination that realizes computing functions, for example a combination of one or more microprocessors, or of a DSP and a microprocessor. The display module 73 may be a liquid-crystal display, an organic electroluminescent display, a laser projection display device, or the like.
Referring to FIG. 8, the amblyopia assistance apparatus 800 includes a processor 81, a memory 82, a display 83, and a bus 84. The processor 81, the memory 82, and the display 83 are connected to one another through the bus 84, which may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 8, but this does not mean that there is only one bus or one type of bus.
The steps of the method or algorithm described in connection with the disclosure of the present invention may be implemented in hardware or by a processor executing software instructions. An embodiment of the present invention further provides a storage medium, which may include the memory 82, for storing the computer software instructions used by the amblyopia assistance apparatus, including the program code designed to execute the amblyopia assistance method. Specifically, the software instructions may consist of corresponding software modules, and the software modules may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium; of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an ASIC, and the ASIC may be located in a core network interface device; of course, the processor and the storage medium may also exist as discrete components in the core network interface device.
An embodiment of the present invention further provides a computer program product that can be loaded directly into the memory 82 and contains software code; after being loaded and executed by a computer, the computer program implements the above amblyopia assistance method.
Those skilled in the art should appreciate that, in one or more of the above examples, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, the latter including any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium accessible by a general-purpose or special-purpose computer.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (13)
- An amblyopia assistance method, characterized by comprising: receiving a first-level trigger request triggered by a user operation, the first-level trigger request including location information of the area the user wants to observe; acquiring, according to the location information, a visual picture of the area the user wants to observe; and processing and displaying the visual picture.
- The method according to claim 1, characterized in that, after the visual picture is processed and displayed, the method further comprises: receiving a second-level trigger request triggered by a user operation; intelligently recognizing the visual picture according to the second-level trigger request; and outputting a sound prompt signal and/or a tactile prompt signal according to the intelligent recognition result.
- The method according to claim 2, characterized in that, after the sound and/or tactile prompt signal is output according to the intelligent recognition result, the method further comprises: receiving a three-level trigger request triggered by a user operation; sending the visual picture to a remote service device according to the three-level trigger request and establishing a voice connection with the remote service device; and receiving a voice prompt signal sent by the remote service device.
- The method according to claim 1, characterized in that, after the visual picture is processed and displayed, the method further comprises: receiving an adjustment instruction triggered by a user operation; and adjusting the magnification ratio of the visual picture according to the adjustment instruction.
- The method according to any one of claims 1 to 4, characterized in that the user operation comprises at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
- An amblyopia assistance apparatus, characterized by comprising: a receiving unit configured to receive a first-level trigger request triggered by a user operation, the first-level trigger request including location information of the area the user wants to observe; a processing unit configured to acquire, according to the location information, a visual picture of the area the user wants to observe and to process the visual picture; and a display unit configured to display the visual picture processed by the processing unit.
- The apparatus according to claim 6, characterized in that the amblyopia assistance apparatus further comprises an intelligent recognition unit and a prompt-signal output unit; the receiving unit is further configured to receive a second-level trigger request triggered by a user operation; the intelligent recognition unit is configured to intelligently recognize the visual picture; and the prompt-signal output unit is configured to output a sound prompt signal and/or a tactile prompt signal according to the intelligent recognition result.
- The apparatus according to claim 6, characterized in that the amblyopia assistance apparatus further comprises a communication unit; the receiving unit is further configured to receive a three-level trigger request triggered by a user operation; the communication unit is configured to send the visual picture to a remote service device and establish a voice connection with the remote service device; and the communication unit is further configured to receive a voice prompt signal sent by the remote service device.
- The apparatus according to claim 6, characterized in that the receiving unit is further configured to receive an adjustment instruction triggered by a user operation, and the processing unit is further configured to adjust the magnification ratio of the visual picture according to the adjustment instruction.
- The apparatus according to claim 6, characterized in that the user operation comprises at least one of a gesture, a head motion, a voice input, and a key input, or a combination thereof.
- An amblyopia assistance apparatus, characterized in that the apparatus comprises a memory and a processor, the memory being configured to store computer-executable code, and the computer-executable code being configured to control the processor to perform the amblyopia assistance method according to any one of claims 1 to 5.
- A storage medium, characterized by being configured to store the computer software instructions used by the amblyopia assistance apparatus according to any one of claims 6 to 10, comprising the program code designed to perform the amblyopia assistance method according to any one of claims 1 to 5.
- A computer program product, characterized in that it can be loaded directly into the internal memory of a computer and contains software code, and the computer program, after being loaded and executed by the computer, implements the amblyopia assistance method according to any one of claims 1 to 5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201680006910.8A CN107223224A (zh) | 2016-12-19 | 2016-12-19 | 一种弱视辅助方法和装置 |
PCT/CN2016/110706 WO2018112688A1 (zh) | 2016-12-19 | 2016-12-19 | 一种弱视辅助方法和装置 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/110706 WO2018112688A1 (zh) | 2016-12-19 | 2016-12-19 | 一种弱视辅助方法和装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018112688A1 | 2018-06-28 |
Family
ID=59927649
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/110706 WO2018112688A1 (zh) | 2016-12-19 | 2016-12-19 | 一种弱视辅助方法和装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107223224A (zh) |
WO (1) | WO2018112688A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019119290A1 (zh) * | 2017-12-20 | 2019-06-27 | 深圳前海达闼云端智能科技有限公司 | 提示信息确定方法、装置、电子设备和计算机程序产品 |
CN108392269B (zh) * | 2017-12-29 | 2021-08-03 | 广州布莱医疗科技有限公司 | 一种手术辅助方法及手术辅助机器人 |
CN109431762B (zh) * | 2018-11-12 | 2020-08-28 | 鹤壁市人民医院 | 一种弱视康复锻炼装置 |
CN114302129A (zh) * | 2021-12-27 | 2022-04-08 | 杭州瑞杰珑科技有限公司 | 一种基于ar的视力辅助方法和系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101518482A (zh) * | 2009-03-18 | 2009-09-02 | 东南大学 | 一种触觉图文显示装置及显示方法 |
CN203445974U (zh) * | 2013-08-30 | 2014-02-19 | 北京京东方光电科技有限公司 | 3d眼镜及3d显示系统 |
CN103677704A (zh) * | 2012-09-20 | 2014-03-26 | 联想(北京)有限公司 | 显示装置和显示方法 |
CN104076907A (zh) * | 2013-03-25 | 2014-10-01 | 联想(北京)有限公司 | 一种控制方法、装置和穿戴式电子设备 |
US20150193018A1 (en) * | 2014-01-07 | 2015-07-09 | Morgan Kolya Venable | Target positioning with gaze tracking |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007043047A2 (en) * | 2005-10-14 | 2007-04-19 | Neurovision, Inc. | Apparatus for improving visual perception |
CN201768134U (zh) * | 2010-08-19 | 2011-03-23 | 浙江博望科技发展有限公司 | 头戴式视觉增强系统 |
CN104055658A (zh) * | 2014-07-11 | 2014-09-24 | 吴祖池 | 一种盲人导航设备 |
- 2016-12-19: WO application PCT/CN2016/110706 filed (published as WO2018112688A1); status: active, Application Filing
- 2016-12-19: CN application CN201680006910.8A filed (published as CN107223224A); status: active, Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101518482A (zh) * | 2009-03-18 | 2009-09-02 | 东南大学 | 一种触觉图文显示装置及显示方法 |
CN103677704A (zh) * | 2012-09-20 | 2014-03-26 | 联想(北京)有限公司 | 显示装置和显示方法 |
CN104076907A (zh) * | 2013-03-25 | 2014-10-01 | 联想(北京)有限公司 | 一种控制方法、装置和穿戴式电子设备 |
CN203445974U (zh) * | 2013-08-30 | 2014-02-19 | 北京京东方光电科技有限公司 | 3d眼镜及3d显示系统 |
US20150193018A1 (en) * | 2014-01-07 | 2015-07-09 | Morgan Kolya Venable | Target positioning with gaze tracking |
Also Published As
Publication number | Publication date |
---|---|
CN107223224A (zh) | 2017-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11323658B2 (en) | Display apparatus and control methods thereof | |
US10585488B2 (en) | System, method, and apparatus for man-machine interaction | |
US20200209961A1 (en) | Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device | |
CN106384098B (zh) | 基于图像的头部姿态检测方法、装置以及终端 | |
WO2018112688A1 (zh) | 一种弱视辅助方法和装置 | |
Szpiro et al. | Finding a store, searching for a product: a study of daily challenges of low vision people | |
Manduchi et al. | The last meter: blind visual guidance to a target | |
WO2017047182A1 (ja) | 情報処理装置、情報処理方法、及びプログラム | |
TW201140340A (en) | Sign language translation system, sign language translation apparatus and method thereof | |
JP2017208638A (ja) | 虹彩認証装置、虹彩認証方法、及びプログラム | |
WO2020088092A1 (zh) | 关键点位置确定方法、装置及电子设备 | |
JP6381361B2 (ja) | データ処理装置、データ処理システム、データ処理装置の制御方法、並びにプログラム | |
EP4002199A1 (en) | Method and device for behavior recognition based on line-of-sight estimation, electronic equipment, and storage medium | |
US10299982B2 (en) | Systems and methods for blind and visually impaired person environment navigation assistance | |
US20200125398A1 (en) | Information processing apparatus, method for processing information, and program | |
JP5754293B2 (ja) | 記入支援システム及びサーバ装置 | |
CN111710207A (zh) | 超声演示装置及系统 | |
Muhsin et al. | Review of substitutive assistive tools and technologies for people with visual impairments: recent advancements and prospects | |
CN114241604A (zh) | 姿态检测的方法、装置、电子设备和存储介质 | |
CN111611812A (zh) | 翻译成盲文 | |
JP6746013B1 (ja) | ペアリング表示装置、ペアリング表示システムおよびペアリング表示方法 | |
Balani et al. | Drishti-Eyes for the blind | |
Sharma et al. | VASE: Smart glasses for the visually impaired | |
Supekar et al. | Design and Development of Portable Navigation System for Disabled Person using Image, Text and Audio | |
WO2018107397A1 (zh) | 一种辅助显示方法、装置及显示系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16924820; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: 1205A, 17.10.2019 |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16924820; Country of ref document: EP; Kind code of ref document: A1 |