WO2023103088A1 - Vision detection method, electronic device, and storage medium - Google Patents

Vision detection method, electronic device, and storage medium

Info

Publication number
WO2023103088A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
detection
vision
detection pattern
user
Prior art date
Application number
PCT/CN2021/140348
Other languages
English (en)
French (fr)
Inventor
王维才
刘熙桐
刘天宇
Original Assignee
深圳创维-Rgb电子有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳创维-Rgb电子有限公司
Publication of WO2023103088A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/02Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B3/028Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B3/032Devices for presenting test symbols or characters, e.g. test chart projectors
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation

Definitions

  • the present application relates to the technical field of electronic equipment, and in particular to a vision detection method, device, electronic equipment and storage medium.
  • an electronic device, as a multimedia tool, can bring users entertainment experiences of music, video and web pages.
  • the electronic device also has a time-shift function, so that the user can select TV programs from a desired date as needed.
  • the main purpose of the present application is to provide a vision detection method, device, electronic equipment and storage medium, aiming to solve the technical problem in the prior art that the electronic equipment has a single function, resulting in poor user experience.
  • the present application proposes a vision detection method for electronic equipment, and the method includes the following steps:
  • Using the first detection pattern, perform vision detection on the user to be detected, and obtain a vision detection result.
  • the step of determining the selected detection pattern when receiving the detection instruction includes:
  • a selected detection mode corresponding to the detection instruction is determined from a plurality of preset detection modes, and the preset detection mode includes a vision detection mode or a color detection mode;
  • a selected detection pattern corresponding to the selected detection mode is determined.
  • before the step of generating the first detection pattern according to the distance and the selected detection pattern, the method further includes:
  • the step of generating a first detection pattern according to the distance and the selected detection pattern includes:
  • the first detection pattern is obtained according to the adjustment ratio and the selected detection pattern.
  • the first detection pattern has standard description information and visual acuity identification information; the step of using the first detection pattern to perform a visual acuity test on the user to be tested and obtain a visual acuity test result includes:
  • if the feedback information does not match the standard description information of the first detection pattern, the selected detection pattern, the distance and the feedback information are used to generate a second detection pattern;
  • the vision detection result is obtained.
  • the step of receiving the feedback information sent by the user to be detected with respect to the first detection pattern includes:
  • Collecting the information to be identified sent by the user to be detected for the first detection pattern, the information to be identified including one of sound information and skeleton information;
  • the information to be identified is identified by using an intelligent identification model, and feedback information corresponding to the information to be identified is obtained.
  • before the step of using an intelligent recognition model to identify the information to be identified and obtaining feedback information corresponding to the information to be identified, the method further includes:
  • acquiring training data, the training data including a plurality of pieces of preset identification information and a plurality of preset identification results corresponding to the plurality of pieces of preset identification information;
  • the method further includes:
  • a vision correction strategy is generated according to the vision change information, so that the user to be detected uses the vision correction strategy to perform vision correction.
  • the present application also proposes a vision detection device, which is used in electronic equipment, and the device includes:
  • the receiving module is used to determine the selected detection pattern when receiving the detection instruction
  • An acquisition module configured to acquire the distance between the user to be detected and the electronic device
  • a generating module configured to generate a first detection pattern according to the distance and the selected detection pattern
  • the detection module is configured to use the first detection pattern to perform vision detection on the user to be detected, and obtain a vision detection result.
  • the present application also proposes an electronic device, which includes: a memory, a processor, and a vision detection program stored in the memory and running on the processor; when the vision detection program is executed by the processor, the steps of the vision detection method described in any one of the above are realized.
  • the present application also proposes a storage medium, on which a vision detection program is stored; when the vision detection program is executed by a processor, the steps of the vision detection method described in any one of the above are implemented.
  • the technical solution of the present application proposes a vision detection method for electronic equipment: when a detection instruction is received, a selected detection pattern is determined; the distance between the user to be detected and the electronic equipment is obtained; a first detection pattern is generated according to the distance and the selected detection pattern; and the first detection pattern is used to perform vision detection on the user to be detected, obtaining a vision detection result. With the method of the present application, the electronic equipment itself performs vision detection, so the equipment gains a vision detection function, which increases its functional diversity and means its functions are no longer single.
  • users do not need to go to a dedicated vision testing institution for a vision test, which saves the time and cost of vision testing and improves the user experience.
  • FIG. 1 is a schematic structural diagram of an electronic device in a hardware operating environment involved in an embodiment of the present application
  • Fig. 2 is a schematic flow chart of an embodiment of the vision testing method of the present application
  • Fig. 3 is a structural block diagram of an embodiment of the vision detection device of the present application.
  • FIG. 1 is a schematic structural diagram of an electronic device in a hardware operating environment involved in the solution of the embodiment of the present application.
  • an electronic device includes: at least one processor 301, a memory 302, and a vision detection program stored on the memory and operable on the processor, the vision detection program configured to implement the aforementioned vision detection method steps.
  • the processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 301 can be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) or PLA (Programmable Logic Array).
  • the processor 301 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the wake-up state, also called a CPU (Central Processing Unit), while the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 301 may be integrated with a GPU (Graphics Processing Unit, image processor), the GPU is used to render and draw the content that needs to be displayed on the display screen.
  • Processor 301 may also include AI (Artificial Intelligence, artificial intelligence) processor, the AI processor is used to process the operation of the vision detection method, so that the vision detection method model can be trained and learned independently, and the efficiency and accuracy are improved.
  • Memory 302 may include one or more storage media, which may be non-transitory.
  • the memory 302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory storage medium in the memory 302 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 301 to implement the vision detection method provided by the method embodiment in this application.
  • the terminal may optionally further include: a communication interface 303 and at least one peripheral device.
  • the processor 301, the memory 302, and the communication interface 303 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the communication interface 303 through a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 304 , a display screen 305 and a power supply 306 .
  • the communication interface 303 may be used to connect at least one peripheral device related to I/O (Input/Output, input/output) to the processor 301 and the memory 302 .
  • the processor 301, the memory 302 and the communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signal, also known as electromagnetic signal.
  • the radio frequency circuit 304 communicates with the communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 304 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit 304 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes, but is not limited to: a metropolitan area network, various generations of mobile communication networks (2G, 3G, 4G and 5G), a wireless local area network and/or a WiFi (Wireless Fidelity, wireless fidelity) network.
  • the radio frequency circuit 304 may also include circuits related to NFC (Near Field Communication, short-range wireless communication), which is not limited in this application.
  • the display screen 305 is used to display the UI (User Interface, user interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • the display screen 305 also has the ability to collect touch signals on or above the surface of the display screen 305 .
  • the touch signal can be input to the processor 301 as a control signal for processing.
  • the display screen 305 can also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • there can be one display screen 305, forming the front panel of the electronic device; in other embodiments, there can be at least two display screens 305, respectively arranged on different surfaces of the electronic device or in a folded design; in still other embodiments, the display screen 305 may be a flexible display screen disposed on a curved or folded surface of the electronic device. The display screen 305 can even be set as a non-rectangular irregular figure, that is, a special-shaped screen. The display screen 305 can be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
  • the power supply 306 is used to supply power to various components in the electronic device.
  • Power source 306 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery can support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the embodiment of the present application also proposes a storage medium, on which a vision detection program is stored, and when the vision detection program is executed by a processor, the steps of the vision detection method as described above are realized. Therefore, details will not be repeated here. In addition, the description of the beneficial effect of adopting the same method will not be repeated here.
  • program instructions may be deployed to be executed on one electronic device, on multiple electronic devices located at one site, or on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
  • the above-mentioned program can be stored in a storage medium; when the program is executed, it may include the processes of the embodiments of the above-mentioned methods.
  • the above-mentioned storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM) or a random access memory (Random Access Memory, RAM), etc.
  • FIG. 2 is a schematic flow diagram of an embodiment of the vision detection method of the present application, the method is used in electronic equipment, and the method includes the following steps:
  • Step S11 Determine the selected detection pattern when the detection instruction is received.
  • the execution subject of the present application is an electronic device, the electronic device is installed with a vision detection program, and when the electronic device executes the vision detection program, the steps of the vision detection method of the present application are implemented.
  • Electronic devices can be devices such as televisions, tablets, and laptops.
  • it may be a detection instruction sent by the user to be detected.
  • for example, if the user to be detected is an adult, the detection instruction he or she sends is for vision detection on himself or herself.
  • the detection instruction may also be sent by another user.
  • for example, if the user to be detected is a child, the person who sends the detection instruction may be the child's guardian, who may be called an auxiliary user.
  • the pattern determined according to the detection instruction for vision detection is the selected detection pattern.
  • the corresponding detection mode is determined according to the detection instruction, and then the selected detection pattern for vision detection is determined according to the corresponding detection mode.
  • Step S12 Obtain the distance between the user to be detected and the electronic device.
  • the electronic device is equipped with an AI camera, and the distance between the user to be detected and the electronic device is directly determined through the AI camera.
  • the user to be detected can be determined from multiple users (including the user to be detected and the auxiliary user) captured by the AI camera.
  • the television may also be equipped with a distance sensor to directly obtain the distance between the user to be detected and the electronic device.
  • Step S13 Generate a first detection pattern according to the distance and the selected detection pattern.
  • the selected detection pattern may not match well with the distance between the user to be detected and the electronic device, resulting in inaccurate detection results obtained when directly using the selected detection pattern for vision detection.
  • therefore, the size of the selected detection pattern is adjusted according to the distance between the user to be detected and the electronic device, and the selected detection pattern with the adjusted size is the first detection pattern.
  • the information contained in the first detection pattern and the information of the selected detection pattern are usually the same, only their sizes are different.
  • Step S14 Using the first detection pattern, perform vision detection on the user to be detected, and obtain a vision detection result.
  • the user to be detected sends corresponding feedback information according to the output first detection pattern; according to the feedback information sent by the user to be detected, it is determined whether the user has accurately stated the description information corresponding to the first detection pattern, and the vision detection result of the user to be detected is then obtained.
  • the technical solution of the present application proposes a vision detection method for electronic equipment: when a detection instruction is received, a selected detection pattern is determined; the distance between the user to be detected and the electronic equipment is obtained; a first detection pattern is generated according to the distance and the selected detection pattern; and the first detection pattern is used to perform vision detection on the user to be detected, obtaining a vision detection result.
  • using the electronic device for vision detection gives the device a vision detection function, which increases its functional diversity so that its functions are no longer single; at the same time, users do not need to go to a dedicated vision testing institution, which saves the time and cost of vision testing and improves the user experience.
  • the step of determining a selected detection pattern when receiving a detection instruction includes: determining a selected detection mode corresponding to the detection instruction from a plurality of preset detection modes, the preset detection modes including a vision detection mode or a color detection mode; and determining the selected detection pattern corresponding to the selected detection mode.
  • after the detection instruction is sent, the electronic device can output a detection interface containing multiple preset detection modes; the user (the user to be detected or the auxiliary user) sends a selection operation for these preset detection modes, and the selected detection mode is determined from them according to the selection operation.
  • the corresponding selected detection mode can be determined directly based on the detection instruction, without outputting multiple detection modes for selection.
  • a plurality of preset detection modes include a vision detection mode and a color detection mode, and the vision detection mode corresponds to a vision detection pattern (a vision comparison table composed of "E"s of various sizes in the prior art, including multiple "E"), the color detection mode corresponds to the color detection graphics (a color comparison table composed of various color patterns, numbers or letters, etc.).
  • before the step of generating the first detection pattern according to the distance and the selected detection pattern, the method further includes: acquiring the preset size and preset distance corresponding to the selected detection pattern; the step of generating a first detection pattern according to the distance and the selected detection pattern includes: determining an adjustment ratio corresponding to the preset size according to the preset distance and the distance; and obtaining the first detection pattern according to the adjustment ratio and the selected detection pattern.
  • the vision detection mode corresponds to a standard vision comparison table.
  • the standard vision comparison table includes multiple "E", each "E” corresponds to a vision identification information (eye vision condition, such as vision 1.0), a preset size , a standard description (such as the opening direction of "E") and a preset distance (the distance between the user and the vision chart).
  • the color detection mode corresponds to a standard color comparison table.
  • the color comparison table includes multiple color power detection patterns, and each color power detection pattern corresponds to a vision identification information (the chromaticity of the eyes, such as color weakness), a Preset size, a standard description (such as the number included in the pattern), and a preset distance (the distance between the user and the standard color table).
  • for any color power detection pattern, its preset size needs to be adjusted using the distance and the preset distance to obtain a pattern of the output size, while its standard description information and vision identification information are left unchanged.
  • the color power detection pattern of the output size is then a first detection pattern.
  • the step of performing a vision test on the user to be detected by using the first detection pattern to obtain a vision test result includes: outputting the first detection pattern; receiving feedback information sent by the user to be detected for the first detection pattern; if the feedback information does not match the standard description information of the first detection pattern, using the selected detection pattern, the distance and the feedback information to generate a second detection pattern; and using the second detection pattern to update the first detection pattern and performing the step of outputting the first detection pattern again, until the feedback information of the user to be detected matches the standard description information of the first detection pattern, at which point the vision detection result is obtained based on the vision identification information of that first detection pattern.
  • the feedback information is generally expressed as the opening direction of "E" and the specific information included in the color power detection pattern (numbers, letters or animal patterns, etc.). If the feedback information is the same as the standard description information, the feedback information matches the standard description information of the first detection pattern; otherwise, the feedback information is wrong.
  • the mismatch means that the user to be detected said the wrong direction of "E", and the second detection pattern needs to be generated by using the wrong feedback information, the selected detection pattern and the distance.
  • the vision identification information corresponding to the second detection pattern is lower than the vision identification information of the first detection pattern, for example, the vision identification information of the first detection pattern is 1.0, and the vision identification information of the second detection pattern is 0.8.
  • usually, an adjusted detection pattern with lower vision identification information is selected from the selected detection pattern, and then, based on that adjusted detection pattern and following the generation method of the first detection pattern (the above-mentioned first adjustment ratio can be reused, or a new first adjustment ratio can be generated), a second detection pattern is generated; at this point the second detection pattern is obviously larger than the first detection pattern. The process then loops until the feedback information of the user to be detected matches the standard description information of the first detection pattern, indicating that the feedback information of the user to be detected is correct, and the vision identification information corresponding to that loop round is the vision detection result.
  • for the color detection mode, a mismatch means that the user to be detected stated the wrong content of the color power detection pattern, and the wrong feedback information, the selected detection pattern and the distance need to be used to generate a second detection pattern; the vision identification information corresponding to the second detection pattern is lower than the vision identification information of the first detection pattern.
  • for example, if the vision identification information of the first detection pattern is normal, the vision identification information of the second detection pattern is color weakness.
  • usually, an adjusted detection pattern with lower vision identification information is selected from the selected detection pattern, and then, based on that adjusted detection pattern (reusing the above-mentioned second adjustment ratio, or generating a new second adjustment ratio) and following the generation method of the first detection pattern, a second detection pattern is generated. The process then loops until the feedback information of the user to be detected matches the standard description information of the first detection pattern, indicating that the feedback information of the user to be detected is correct; the vision identification information corresponding to that loop round is the vision detection result.
  • multiple different first detection patterns with the same vision identification information can be output, when the accuracy rate of the feedback information corresponding to the multiple first detection patterns is higher than a set value (set by the user based on demand) , indicating that the feedback information matches the standard description information of the first detection pattern, otherwise it does not match.
  • the step of receiving the feedback information sent by the user to be detected with respect to the first detection pattern includes: collecting the information to be identified sent by the user to be detected with respect to the first detection pattern, the information to be identified including one of sound information and skeleton information; and identifying the information to be identified by using an intelligent identification model to obtain the feedback information corresponding to the information to be identified.
  • the method further includes: acquiring training data, the training data including a plurality of pieces of preset identification information and a plurality of preset identification results corresponding to the plurality of pieces of preset identification information; and inputting the plurality of pieces of preset identification information and the plurality of preset identification results into an initial model for training to obtain the intelligent recognition model.
  • the preset identification information is preset information to be identified (such as sound information or skeleton information), and the preset recognition result is the accurate recognition result corresponding to that preset information, for example the accurate recognition result corresponding to the sound information, or the accurate recognition result corresponding to the skeleton information.
  • for an intelligent recognition model for sound recognition, the training data is sound-related training data, that is, the preset identification information is sound information; for an intelligent recognition model for skeleton recognition, the training data is skeleton-related training data, that is, the preset identification information is skeleton information.
  • the trained intelligent recognition model can be obtained directly without a training process.
  • the initial model can be a neural network model, etc., which is not limited in this application.
  • the method further includes: acquiring the historical vision test result corresponding to the user to be tested; obtaining vision change information according to the historical vision test result and the vision test result; and generating a correction strategy according to the vision change information.
  • the historical vision test result can be a vision test result of the user to be tested at a historical moment input by a user (the user to be tested or the auxiliary user), or a vision test result obtained at a historical moment according to the method of this application. Usually, there is a certain time interval, such as one week or one month, between the historical vision test result and the current vision test result.
  • the detection of the user's vision and color is realized, and the vision detection is more diverse.
  • FIG. 3 is a structural block diagram of an embodiment of the vision detection device of the present application.
  • the device is used in electronic equipment. Based on the same inventive concept as the previous embodiment, the device includes:
  • the receiving module 10 is used to determine the selected detection pattern when receiving the detection instruction
  • An acquisition module 20 configured to acquire the distance between the user to be detected and the electronic device
  • a generating module 30, configured to generate a first detection pattern according to the distance and the selected detection pattern
  • the detection module 40 is configured to use the first detection pattern to perform vision detection on the user to be detected, and obtain a vision detection result.
  • the receiving module 10 is also configured to determine a selected detection mode corresponding to the detection instruction from a plurality of preset detection modes when receiving the detection instruction, and the preset detection mode includes the vision detection mode or A color detection mode; determining a selected detection pattern corresponding to the selected detection mode.
  • the device also includes:
  • a size acquisition module configured to acquire a preset size and a preset distance corresponding to the selected detection figure
  • the generating module 30 is further configured to determine an adjustment ratio corresponding to the preset size according to the preset distance and the distance; and obtain the first detection pattern according to the adjustment ratio and the selected detection pattern.
  • the first detection pattern has standard description information and vision identification information; the device also includes:
  • An output module configured to: output the first detection pattern; receive feedback information sent by the user to be detected for the first detection pattern; if the feedback information does not match the standard description information of the first detection pattern, use the selected detection pattern, the distance and the feedback information to generate a second detection pattern; and use the second detection pattern to update the first detection pattern and perform the step of outputting the first detection pattern again, until the feedback information of the user to be detected matches the standard description information of the first detection pattern, at which point the vision detection result is obtained based on the vision identification information of the first detection pattern.
  • the output module is also used to collect the information to be identified sent by the user to be detected for the first detection pattern, the information to be identified including one of sound information and skeleton information, and to identify the information to be identified by using the intelligent recognition model to obtain the feedback information corresponding to the information to be identified.
  • Further, the output module is also used to obtain training data, the training data including a plurality of pieces of preset identification information and a plurality of preset identification results corresponding to the plurality of pieces of preset identification information, and to input the plurality of pieces of preset identification information and the plurality of preset identification results into an initial model for training to obtain the intelligent recognition model.
  • the device also includes:
  • a correction module configured to acquire the historical vision test results corresponding to the user to be detected; obtain vision change information according to the historical vision test results and the vision test results; generate vision correction strategies according to the vision change information, to Make the user to be detected use the vision correction strategy to perform vision correction.

Abstract

The present application discloses a vision detection method, including: determining a selected detection pattern when a detection instruction is received; obtaining the distance between a user to be detected and the electronic device; generating a first detection pattern according to the distance and the selected detection pattern; and performing vision detection on the user to be detected by using the first detection pattern to obtain a vision detection result. The present application also discloses a vision detection apparatus, an electronic device and a storage medium. With the method of the present invention, the electronic device has a vision detection function, which increases the functional diversity of the electronic device; at the same time, users do not need to go to a dedicated vision testing institution for a vision test, which saves the time and cost of vision testing and improves the user experience.

Description

Vision detection method, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 202111502807.1, filed on December 9, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of electronic devices, and in particular to a vision detection method, apparatus, electronic device and storage medium.
Background Art
At present, electronic devices, as multimedia tools, can bring users entertainment experiences of music, video and web pages. At the same time, electronic devices also have a time-shift function, so that users can select TV programs from a desired date as needed.
However, existing electronic devices have a single function, resulting in a poor user experience.
Technical Problem
The main purpose of the present application is to provide a vision detection method, apparatus, electronic device and storage medium, aiming to solve the technical problem in the prior art that some electronic devices have a single function, resulting in a poor user experience.
Technical Solution
To achieve the above purpose, the present application proposes a vision detection method for an electronic device, the method including the following steps:
determining a selected detection pattern when a detection instruction is received;
obtaining the distance between a user to be detected and the electronic device;
generating a first detection pattern according to the distance and the selected detection pattern; and
performing vision detection on the user to be detected by using the first detection pattern to obtain a vision detection result.
In an embodiment, the step of determining a selected detection pattern when a detection instruction is received includes:
when a detection instruction is received, determining a selected detection mode corresponding to the detection instruction from a plurality of preset detection modes, the preset detection modes including a vision detection mode or a color detection mode; and
determining a selected detection pattern corresponding to the selected detection mode.
In an embodiment, before the step of generating a first detection pattern according to the distance and the selected detection pattern, the method further includes:
obtaining a preset size and a preset distance corresponding to the selected detection pattern;
the step of generating a first detection pattern according to the distance and the selected detection pattern includes:
determining an adjustment ratio corresponding to the preset size according to the preset distance and the distance; and
obtaining the first detection pattern according to the adjustment ratio and the selected detection pattern.
In an embodiment, the first detection pattern has standard description information and vision identification information; the step of performing vision detection on the user to be detected by using the first detection pattern to obtain a vision detection result includes:
outputting the first detection pattern;
receiving feedback information sent by the user to be detected for the first detection pattern;
if the feedback information does not match the standard description information of the first detection pattern, generating a second detection pattern by using the selected detection pattern, the distance and the feedback information; and
updating the first detection pattern with the second detection pattern and performing the step of outputting the first detection pattern again, until the feedback information of the user to be detected matches the standard description information of the first detection pattern, and obtaining the vision detection result based on the vision identification information of that first detection pattern.
In an embodiment, the step of receiving feedback information sent by the user to be detected for the first detection pattern includes:
collecting information to be identified sent by the user to be detected for the first detection pattern, the information to be identified including one of sound information and skeleton information; and
identifying the information to be identified by using an intelligent recognition model to obtain feedback information corresponding to the information to be identified.
In an embodiment, before the step of identifying the information to be identified by using an intelligent recognition model to obtain feedback information corresponding to the information to be identified, the method further includes:
obtaining training data, the training data including a plurality of pieces of preset identification information and a plurality of preset identification results corresponding to the plurality of pieces of preset identification information; and
inputting the plurality of pieces of preset identification information and the plurality of preset identification results into an initial model for training to obtain the intelligent recognition model.
In an embodiment, after the step of obtaining the vision detection result, the method further includes:
obtaining a historical vision detection result corresponding to the user to be detected;
obtaining vision change information according to the historical vision detection result and the vision detection result; and
generating a vision correction strategy according to the vision change information, so that the user to be detected performs vision correction using the vision correction strategy.
In addition, to achieve the above purpose, the present application also proposes a vision detection apparatus for an electronic device, the apparatus including:
a receiving module, configured to determine a selected detection pattern when a detection instruction is received;
an obtaining module, configured to obtain the distance between a user to be detected and the electronic device;
a generating module, configured to generate a first detection pattern according to the distance and the selected detection pattern; and
a detection module, configured to perform vision detection on the user to be detected by using the first detection pattern to obtain a vision detection result.
In addition, to achieve the above purpose, the present application also proposes an electronic device, the electronic device including: a memory, a processor, and a vision detection program stored in the memory and running on the processor, where the steps of the vision detection method described in any one of the above are implemented when the vision detection program is executed by the processor.
In addition, to achieve the above purpose, the present application also proposes a storage medium on which a vision detection program is stored, where the steps of the vision detection method described in any one of the above are implemented when the vision detection program is executed by a processor.
Beneficial Effects
The technical solution of the present application proposes a vision detection method for an electronic device: when a detection instruction is received, a selected detection pattern is determined; the distance between a user to be detected and the electronic device is obtained; a first detection pattern is generated according to the distance and the selected detection pattern; and vision detection is performed on the user to be detected by using the first detection pattern to obtain a vision detection result. With the method of the present application, the electronic device itself is used for vision detection, so that the electronic device has a vision detection function, which increases the functional diversity of the electronic device so that its functions are no longer single; at the same time, users do not need to go to a dedicated vision testing institution for a vision test, which saves the time and cost of vision testing and improves the user experience.
Brief Description of the Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from the structures shown in these drawings without creative effort.
FIG. 1 is a schematic structural diagram of an electronic device in a hardware operating environment involved in an embodiment of the present application;
FIG. 2 is a schematic flowchart of an embodiment of the vision detection method of the present application;
FIG. 3 is a structural block diagram of an embodiment of the vision detection apparatus of the present application.
The realization of the purpose, functional characteristics and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiments of the Present Invention
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of an electronic device in a hardware operating environment involved in an embodiment of the present application.
Generally, the electronic device includes: at least one processor 301, a memory 302, and a vision detection program stored in the memory and executable on the processor, the vision detection program being configured to implement the steps of the vision detection method described above.
The processor 301 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 301 can be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) or PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the wake-up state, also called a CPU (Central Processing Unit), while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. The processor 301 may also include an AI (Artificial Intelligence) processor, which handles computation related to the vision detection method, so that the vision detection model can be trained and learn autonomously, improving efficiency and accuracy.
The memory 302 may include one or more storage media, which may be non-transitory. The memory 302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices. In some embodiments, the non-transitory storage medium in the memory 302 is used to store at least one instruction, and the at least one instruction is executed by the processor 301 to implement the vision detection method provided by the method embodiments of the present application.
In some embodiments, the terminal may optionally further include: a communication interface 303 and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected through a bus or a signal line. Each peripheral device may be connected to the communication interface 303 through a bus, a signal line or a circuit board. Specifically, the peripheral devices include: at least one of a radio frequency circuit 304, a display screen 305 and a power supply 306.
The communication interface 303 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 301 and the memory 302. In some embodiments, the processor 301, the memory 302 and the communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 304 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 304 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 304 can communicate with other terminals through at least one wireless communication protocol. The wireless communication protocol includes but is not limited to: a metropolitan area network, mobile communication networks of various generations (2G, 3G, 4G and 5G), a wireless local area network and/or a WiFi (Wireless Fidelity) network. In some embodiments, the radio frequency circuit 304 may also include circuits related to NFC (Near Field Communication), which is not limited in the present application.
The display screen 305 is used to display a UI (User Interface). The UI can include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, the display screen 305 also has the ability to collect touch signals on or above its surface. The touch signal can be input to the processor 301 as a control signal for processing. At this time, the display screen 305 can also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there can be one display screen 305, forming the front panel of the electronic device; in other embodiments, there can be at least two display screens 305, respectively arranged on different surfaces of the electronic device or in a folded design; in still other embodiments, the display screen 305 may be a flexible display screen disposed on a curved or folded surface of the electronic device. The display screen 305 can even be set as a non-rectangular irregular figure, that is, a special-shaped screen. The display screen 305 can be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The power supply 306 is used to supply power to the components in the electronic device. The power supply 306 may be alternating current, direct current, a disposable battery or a rechargeable battery. When the power supply 306 includes a rechargeable battery, the rechargeable battery can support wired charging or wireless charging. The rechargeable battery can also be used to support fast charging technology. Those skilled in the art can understand that the structure shown in FIG. 1 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, or combine certain components, or have a different component arrangement.
In addition, an embodiment of the present application also proposes a storage medium on which a vision detection program is stored, where the steps of the vision detection method described above are implemented when the vision detection program is executed by a processor; therefore, details are not repeated here. Likewise, the description of the beneficial effects of the same method is not repeated. For technical details not disclosed in the storage medium embodiment of the present application, refer to the description of the method embodiments of the present application. As an example, the program instructions may be deployed to be executed on one electronic device, on multiple electronic devices located at one site, or on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware through a computer program. The program can be stored in a storage medium, and when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Embodiment 1:
Referring to FIG. 2, FIG. 2 is a schematic flowchart of an embodiment of the vision detection method of the present application. The method is used in an electronic device and includes the following steps:
Step S11: determining a selected detection pattern when a detection instruction is received.
It should be noted that the execution subject of the present application is an electronic device; the electronic device is installed with a vision detection program, and when the electronic device executes the vision detection program, the steps of the vision detection method of the present application are implemented. The electronic device may be a device such as a television, a tablet computer or a laptop computer.
In specific applications, the detection instruction may be sent by the user to be detected; for example, if the user to be detected is an adult, the detection instruction he or she sends is for a vision test on himself or herself. In other embodiments, the detection instruction may be sent by another user; for example, if the user to be detected is a child, the person sending the detection instruction may be the child's guardian, who may be called an auxiliary user.
When the detection instruction is received, the pattern determined according to the detection instruction for vision detection is the selected detection pattern. Usually, the corresponding detection mode is determined according to the detection instruction, and then the selected detection pattern used for vision detection is determined according to that detection mode.
Step S12: obtaining the distance between the user to be detected and the electronic device.
Usually the electronic device is equipped with an AI camera, and the distance between the user to be detected and the electronic device is directly determined through the AI camera. In some embodiments, before the distance is determined, the user to be detected can first be identified among the multiple users (including the user to be detected and the auxiliary user) captured by the AI camera.
In some embodiments, the television may also be equipped with a distance sensor to directly obtain the distance between the user to be detected and the electronic device.
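By way of illustration only, a minimal sketch of a camera-based distance estimate is given below. The embodiment only states that the AI camera determines the distance directly, without specifying how; the pinhole-camera model, the OpenCV face detector, and the calibration constants used here are assumptions made for the sketch rather than part of the disclosure.

```python
# Illustrative sketch of a monocular distance estimate (assumed approach, not
# specified by the embodiment). The constants would come from prior calibration.
import cv2  # OpenCV, assumed available on the device

KNOWN_FACE_WIDTH_CM = 15.0   # assumed average face width
FOCAL_LENGTH_PX = 950.0      # assumed calibrated focal length in pixels

def estimate_distance_cm(frame) -> float | None:
    """Estimate the user-to-screen distance from the width of the detected face."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no user visible in the frame
    # Assume the largest detected face belongs to the user to be detected.
    _, _, w, _ = max(faces, key=lambda f: f[2] * f[3])
    return KNOWN_FACE_WIDTH_CM * FOCAL_LENGTH_PX / w
```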
Step S13: generating a first detection pattern according to the distance and the selected detection pattern.
The selected detection pattern may not match the distance between the user to be detected and the electronic device well, so a detection result obtained by directly using the selected detection pattern for vision detection would be inaccurate. Therefore, the size of the selected detection pattern needs to be adjusted according to the distance between the user to be detected and the electronic device, and the selected detection pattern with the adjusted size is the first detection pattern.
It can be understood that the information contained in the first detection pattern and the information of the selected detection pattern are usually the same; only their sizes differ.
Step S14: performing vision detection on the user to be detected by using the first detection pattern to obtain a vision detection result.
The first detection pattern is output, and the user to be detected sends corresponding feedback information according to the output first detection pattern; according to the feedback information sent by the user to be detected, it is determined whether the user has accurately stated the description information corresponding to the first detection pattern, and the vision detection result of the user to be detected is then obtained.
The technical solution of the present application proposes a vision detection method for an electronic device: when a detection instruction is received, a selected detection pattern is determined; the distance between the user to be detected and the electronic device is obtained; a first detection pattern is generated according to the distance and the selected detection pattern; and vision detection is performed on the user to be detected by using the first detection pattern to obtain a vision detection result. With the method of the present application, the electronic device is used for vision detection, so that the electronic device has a vision detection function, which increases its functional diversity so that its functions are no longer single; at the same time, users do not need to go to a dedicated vision testing institution for a vision test, which saves the time and cost of vision testing and improves the user experience.
Embodiment 2:
In an embodiment, the step of determining a selected detection pattern when a detection instruction is received includes: determining a selected detection mode corresponding to the detection instruction from a plurality of preset detection modes, the preset detection modes including a vision detection mode or a color detection mode; and determining a selected detection pattern corresponding to the selected detection mode.
After the detection instruction is sent, the electronic device can output a detection interface containing multiple preset detection modes; the user (the user to be detected or the auxiliary user) sends a selection operation for these preset detection modes, and the selected detection mode is determined from them according to the selection operation.
In some embodiments, the corresponding selected detection mode can be determined directly based on the detection instruction, without outputting multiple detection modes for selection.
The plurality of preset detection modes includes a vision detection mode and a color detection mode; the vision detection mode corresponds to a vision detection pattern (a vision comparison chart composed of "E"s of various sizes in the prior art, containing multiple "E"s used for detection), and the color detection mode corresponds to a color detection pattern (a color comparison chart composed of patterns, numbers or letters of various colors).
In some embodiments, before the step of generating a first detection pattern according to the distance and the selected detection pattern, the method further includes: obtaining a preset size and a preset distance corresponding to the selected detection pattern; and the step of generating a first detection pattern according to the distance and the selected detection pattern includes: determining an adjustment ratio corresponding to the preset size according to the preset distance and the distance; and obtaining the first detection pattern according to the adjustment ratio and the selected detection pattern.
Usually, the vision detection mode corresponds to a standard vision comparison chart. The standard vision comparison chart includes multiple "E"s; each "E" corresponds to a piece of vision identification information (the eye's visual acuity, such as 1.0), a preset size, a piece of standard description information (such as the opening direction of the "E") and a preset distance (the distance between the user and the vision chart). For any "E", its preset size needs to be adjusted using the distance and the preset distance (obtaining a first adjustment ratio by which the "E" is enlarged or shrunk) to obtain an "E" of the output size, while the standard description information and the vision identification information of the "E" are left unchanged; the "E" of the output size is then a first detection pattern.
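The size adjustment described above can be illustrated with the following sketch. The field names and the linear scaling rule (adjustment ratio = actual distance divided by preset distance) are assumptions consistent with, but not dictated by, the embodiment, which only states that an adjustment ratio is determined from the two distances.

```python
# Illustrative sketch of generating the first detection pattern by scaling the
# selected pattern; only the size changes, the description and acuity label do not.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DetectionPattern:
    vision_id: float          # vision identification information, e.g. 1.0
    size_mm: float            # preset size of the optotype
    description: str          # standard description information, e.g. opening direction "up"
    preset_distance_m: float  # preset distance the chart row was designed for

def first_detection_pattern(selected: DetectionPattern,
                            actual_distance_m: float) -> DetectionPattern:
    """Scale the selected detection pattern to the user's actual distance."""
    adjustment_ratio = actual_distance_m / selected.preset_distance_m
    return replace(selected, size_mm=selected.size_mm * adjustment_ratio)

# Example: a 1.0-acuity "E" designed for 5 m, viewed from 2.5 m, is shown at half size.
e_row = DetectionPattern(vision_id=1.0, size_mm=14.5, description="up", preset_distance_m=5.0)
print(first_detection_pattern(e_row, actual_distance_m=2.5).size_mm)  # 7.25
```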
Similarly, the color detection mode corresponds to a standard color comparison chart. The color comparison chart includes multiple color vision detection patterns; each pattern corresponds to a piece of vision identification information (the eye's color vision condition, such as color weakness), a preset size, a piece of standard description information (such as the number contained in the pattern) and a preset distance (the distance between the user and the standard color comparison chart). For any color vision detection pattern, its preset size needs to be adjusted using the distance and the preset distance (obtaining a second adjustment ratio by which the pattern is enlarged or shrunk) to obtain a pattern of the output size, while the standard description information and the vision identification information of the pattern are left unchanged; the color vision detection pattern of the output size is then a first detection pattern.
In some embodiments, the step of performing vision detection on the user to be detected by using the first detection pattern to obtain a vision detection result includes: outputting the first detection pattern; receiving feedback information sent by the user to be detected for the first detection pattern; if the feedback information does not match the standard description information of the first detection pattern, generating a second detection pattern by using the selected detection pattern, the distance and the feedback information; and updating the first detection pattern with the second detection pattern and performing the step of outputting the first detection pattern again, until the feedback information of the user to be detected matches the standard description information of the first detection pattern, and obtaining the vision detection result based on the vision identification information of that first detection pattern.
The feedback information is generally the opening direction of the "E" or the specific content contained in the color vision detection pattern (numbers, letters, animal patterns, etc.). If the feedback information is the same as the standard description information, the feedback information matches the standard description information of the first detection pattern; otherwise they do not match and the feedback information is wrong.
For the vision detection mode, a mismatch means that the user to be detected stated the wrong direction of the "E", and the wrong feedback information, the selected detection pattern and the distance need to be used to generate a second detection pattern. Generally, the vision identification information corresponding to the second detection pattern is lower than that of the first detection pattern; for example, if the vision identification information of the first detection pattern is 1.0, that of the second detection pattern is 0.8. Usually, an adjusted detection pattern with lower vision identification information is selected from the selected detection pattern, and then, based on that adjusted detection pattern and following the generation method of the first detection pattern (the above first adjustment ratio can be reused, or a new first adjustment ratio can be generated), a second detection pattern is generated; at this point the second detection pattern is obviously larger than the first detection pattern. The process then loops until the feedback information of the user to be detected matches the standard description information of the first detection pattern, indicating that the feedback information is correct; the vision identification information corresponding to that loop round is the vision detection result.
For the color detection mode, a mismatch means that the user to be detected stated the wrong content of the color vision detection pattern, and the wrong feedback information, the selected detection pattern and the distance need to be used to generate a second detection pattern. Generally, the vision identification information corresponding to the second detection pattern is lower than that of the first detection pattern; for example, if the vision identification information of the first detection pattern is "normal", that of the second detection pattern is "color weakness". Usually, an adjusted detection pattern with lower vision identification information is selected from the selected detection pattern, and then, based on that adjusted detection pattern (reusing the above second adjustment ratio, or generating a new one) and following the generation method of the first detection pattern, a second detection pattern is generated. The process then loops until the feedback information of the user to be detected matches the standard description information of the first detection pattern, indicating that the feedback information is correct; the vision identification information corresponding to that loop round is the vision detection result.
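The mismatch-and-regenerate loop described in the two paragraphs above can be sketched as follows for the vision detection mode, reusing the DetectionPattern and first_detection_pattern names from the earlier sketch. The step-down rule (move to the next lower-acuity row after a wrong answer) and the show/get_feedback callbacks are assumptions used only to make the control flow concrete.

```python
# Illustrative sketch of the detection loop: output a pattern, compare the feedback
# with the standard description, and step down to a lower-acuity pattern on a mismatch.
from typing import Callable

def run_vision_test(chart_rows: list[DetectionPattern],  # ordered from higher to lower acuity
                    actual_distance_m: float,
                    show: Callable[[DetectionPattern], None],
                    get_feedback: Callable[[], str]) -> float | None:
    """Return the vision identification information of the first correctly answered row."""
    for selected in chart_rows:
        pattern = first_detection_pattern(selected, actual_distance_m)
        show(pattern)                        # output the (current) first detection pattern
        feedback = get_feedback()            # e.g. recognized speech or gesture
        if feedback == pattern.description:  # feedback matches the standard description
            return pattern.vision_id         # this round's acuity is the detection result
        # otherwise continue: the next row plays the role of the second detection pattern
    return None                              # no row was answered correctly
```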
In some embodiments, multiple different first detection patterns with the same vision identification information can be output; when the accuracy rate of the feedback information corresponding to the multiple first detection patterns is higher than a set value (set by the user based on demand), the feedback information is considered to match the standard description information of the first detection pattern, and otherwise it does not match.
Further, the step of receiving feedback information sent by the user to be detected for the first detection pattern includes: collecting information to be identified sent by the user to be detected for the first detection pattern, the information to be identified including one of sound information and skeleton information; and identifying the information to be identified by using an intelligent recognition model to obtain feedback information corresponding to the information to be identified.
The electronic device may be equipped with a microphone and an AI camera to obtain the sound information and the skeleton information respectively, where the skeleton information includes at least one of left-eye information, right-eye information, left-ear information, right-ear information, nose information, neck information, left-shoulder information, right-shoulder information, left-elbow information, right-elbow information, left-wrist information, right-wrist information, left-waist information, right-waist information, left-knee information, right-knee information, left-ankle information and right-ankle information.
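As an illustration of how skeleton information might be mapped to feedback, the sketch below assumes the user points an arm toward the opening of the "E" and infers the direction from the elbow-to-wrist vector of the pose keypoints; the embodiment itself does not specify this mapping, so the rule and the keypoint names are hypothetical.

```python
# Hypothetical mapping from pose keypoints to an "E" opening direction; the patent does
# not specify how skeleton information becomes feedback, so this is an assumed rule.
def direction_from_skeleton(keypoints: dict[str, tuple[float, float]]) -> str:
    """Map image-coordinate keypoints (y grows downward) to up/down/left/right."""
    ex, ey = keypoints["right_elbow"]
    wx, wy = keypoints["right_wrist"]
    dx, dy = wx - ex, wy - ey
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

print(direction_from_skeleton({"right_elbow": (320, 240), "right_wrist": (400, 235)}))  # right
```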
Specifically, before the step of identifying the information to be identified by using an intelligent recognition model to obtain feedback information corresponding to the information to be identified, the method further includes: obtaining training data, the training data including a plurality of pieces of preset identification information and a plurality of preset identification results corresponding to the plurality of pieces of preset identification information; and inputting the plurality of pieces of preset identification information and the plurality of preset identification results into an initial model for training to obtain the intelligent recognition model.
The preset identification information is preset information to be identified (for example sound information or skeleton information), and the preset identification result is the accurate recognition result corresponding to that preset information, for example the accurate recognition result corresponding to the sound information or to the skeleton information.
For an intelligent recognition model for sound recognition, the training data is sound-related training data, that is, the preset identification information is sound information; for an intelligent recognition model for skeleton recognition, the training data is skeleton-related training data, that is, the preset identification information is skeleton information.
In some embodiments, a trained intelligent recognition model can be obtained directly without a training process; in general applications, the initial model can be a neural network model or the like, which is not limited in the present application.
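A minimal training sketch consistent with the step of inputting preset identification information and preset identification results into an initial model is shown below. The feature representation (for example flattened keypoint coordinates or audio features) and the use of a scikit-learn multilayer perceptron as the "initial model" are assumptions; the embodiment only requires some trainable model such as a neural network.

```python
# Illustrative training sketch: fit an assumed "initial model" on (preset identification
# information, preset identification results) to obtain the intelligent recognition model.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_intelligent_recognition_model(preset_info: np.ndarray,      # (n_samples, n_features)
                                        preset_results: np.ndarray):  # (n_samples,), labels such as "up"
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    model.fit(preset_info, preset_results)
    return model

# Usage sketch: obtain the feedback for one new piece of information to be identified.
# model = train_intelligent_recognition_model(X_train, y_train)
# feedback = model.predict(x_new.reshape(1, -1))[0]
```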
Further, after the step of obtaining the vision detection result, the method further includes: obtaining a historical vision detection result corresponding to the user to be detected; obtaining vision change information according to the historical vision detection result and the vision detection result; and generating a correction strategy according to the vision change information.
It can be understood that the historical vision detection result may be a vision detection result of the user to be detected at a historical moment input by a user (the user to be detected or the auxiliary user), or a vision detection result of the user to be detected obtained at a historical moment according to the method of the present application. Usually there is a certain time interval, such as one week or one month, between the historical vision detection result and the current vision detection result.
The historical vision detection result is compared with the current vision detection result to determine the vision change information of the user to be detected, such as vision getting worse, vision getting better, or a tendency toward color weakness. A correction strategy is then obtained based on the vision change information; for example, if vision has improved, encouraging language is output to urge continued effort, and if vision has worsened, a prompt is output suggesting less television watching and gaming.
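The comparison step can be sketched as follows; the thresholds and the wording of the prompts are assumptions, since the embodiment only requires comparing the two results and generating a strategy from the change.

```python
# Illustrative sketch of turning the historical and current results into vision change
# information and a correction strategy; the rules and messages are assumed examples.
def vision_change_and_strategy(historical: float, current: float) -> tuple[str, str]:
    if current > historical:
        return ("vision improved",
                "Keep it up: your eyesight has improved since the last test.")
    if current < historical:
        return ("vision worsened",
                "Your eyesight has declined; consider less screen time and a professional exam.")
    return ("vision unchanged", "No change detected; keep your current habits.")

print(vision_change_and_strategy(historical=1.0, current=0.8))
```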
Meanwhile, in the embodiments of the present application, detection of both the user's visual acuity and color vision is realized, making vision detection more diverse.
Referring to FIG. 3, FIG. 3 is a structural block diagram of an embodiment of the vision detection apparatus of the present application. The apparatus is used in an electronic device and, based on the same inventive concept as the foregoing embodiments, the apparatus includes:
a receiving module 10, configured to determine a selected detection pattern when a detection instruction is received;
an obtaining module 20, configured to obtain the distance between a user to be detected and the electronic device;
a generating module 30, configured to generate a first detection pattern according to the distance and the selected detection pattern; and
a detection module 40, configured to perform vision detection on the user to be detected by using the first detection pattern to obtain a vision detection result.
Further, the receiving module 10 is also configured to, when a detection instruction is received, determine a selected detection mode corresponding to the detection instruction from a plurality of preset detection modes, the preset detection modes including a vision detection mode or a color detection mode, and determine a selected detection pattern corresponding to the selected detection mode.
Further, the apparatus also includes:
a size obtaining module, configured to obtain a preset size and a preset distance corresponding to the selected detection pattern;
the generating module 30 is further configured to determine an adjustment ratio corresponding to the preset size according to the preset distance and the distance, and obtain the first detection pattern according to the adjustment ratio and the selected detection pattern.
Further, the first detection pattern has standard description information and vision identification information; the apparatus also includes:
an output module, configured to: output the first detection pattern; receive feedback information sent by the user to be detected for the first detection pattern; if the feedback information does not match the standard description information of the first detection pattern, generate a second detection pattern by using the selected detection pattern, the distance and the feedback information; and update the first detection pattern with the second detection pattern and perform the step of outputting the first detection pattern again, until the feedback information of the user to be detected matches the standard description information of the first detection pattern, and obtain the vision detection result based on the vision identification information of that first detection pattern.
Further, the output module is also configured to collect information to be identified sent by the user to be detected for the first detection pattern, the information to be identified including one of sound information and skeleton information, and to identify the information to be identified by using an intelligent recognition model to obtain feedback information corresponding to the information to be identified.
Further, the output module is also configured to obtain training data, the training data including a plurality of pieces of preset identification information and a plurality of preset identification results corresponding to the plurality of pieces of preset identification information, and to input the plurality of pieces of preset identification information and the plurality of preset identification results into an initial model for training to obtain the intelligent recognition model.
Further, the apparatus also includes:
a correction module, configured to obtain a historical vision detection result corresponding to the user to be detected, obtain vision change information according to the historical vision detection result and the vision detection result, and generate a vision correction strategy according to the vision change information, so that the user to be detected performs vision correction using the vision correction strategy.
It should be noted that, since the steps performed by the apparatus of this embodiment are the same as those of the foregoing method embodiments, reference may be made to the foregoing embodiments for its specific implementation and the technical effects that can be achieved, which are not repeated here.
The above are only optional embodiments of the present application and do not limit the patent scope of the present application. Any equivalent structural transformation made using the contents of the specification and drawings of the present application under the inventive concept of the present application, or direct or indirect application in other related technical fields, is included in the patent protection scope of the present application.

Claims (15)

  1. A vision detection method for an electronic device, the method comprising the following steps:
    determining a selected detection pattern when a detection instruction is received;
    obtaining the distance between a user to be detected and the electronic device;
    generating a first detection pattern according to the distance and the selected detection pattern; and
    performing vision detection on the user to be detected by using the first detection pattern to obtain a vision detection result.
  2. The method according to claim 1, wherein the step of determining a selected detection pattern when a detection instruction is received comprises:
    when a detection instruction is received, determining a selected detection mode corresponding to the detection instruction from a plurality of preset detection modes, the preset detection modes comprising a vision detection mode or a color detection mode; and
    determining a selected detection pattern corresponding to the selected detection mode.
  3. The method according to claim 1, wherein before the step of generating a first detection pattern according to the distance and the selected detection pattern, the method further comprises:
    obtaining a preset size and a preset distance corresponding to the selected detection pattern;
    the step of generating a first detection pattern according to the distance and the selected detection pattern comprises:
    determining an adjustment ratio corresponding to the preset size according to the preset distance and the distance; and
    obtaining the first detection pattern according to the adjustment ratio and the selected detection pattern.
  4. The method according to claim 2, wherein the first detection pattern has standard description information and vision identification information;
    the step of performing vision detection on the user to be detected by using the first detection pattern to obtain a vision detection result comprises:
    outputting the first detection pattern;
    receiving feedback information sent by the user to be detected for the first detection pattern;
    if the feedback information does not match the standard description information of the first detection pattern, generating a second detection pattern by using the selected detection pattern, the distance and the feedback information; and
    updating the first detection pattern with the second detection pattern and performing the step of outputting the first detection pattern again, until the feedback information of the user to be detected matches the standard description information of the first detection pattern, and obtaining the vision detection result based on the vision identification information of that first detection pattern.
  5. The method according to claim 4, wherein the step of receiving feedback information sent by the user to be detected for the first detection pattern comprises:
    collecting information to be identified sent by the user to be detected for the first detection pattern, the information to be identified comprising one of sound information and skeleton information; and
    identifying the information to be identified by using an intelligent recognition model to obtain feedback information corresponding to the information to be identified.
  6. The method according to claim 5, wherein before the step of identifying the information to be identified by using an intelligent recognition model to obtain feedback information corresponding to the information to be identified, the method further comprises:
    obtaining training data, the training data comprising a plurality of pieces of preset identification information and a plurality of preset identification results corresponding to the plurality of pieces of preset identification information; and
    inputting the plurality of pieces of preset identification information and the plurality of preset identification results into an initial model for training to obtain the intelligent recognition model.
  7. The method according to claim 1, wherein after the step of obtaining the vision detection result, the method further comprises:
    obtaining a historical vision detection result corresponding to the user to be detected;
    obtaining vision change information according to the historical vision detection result and the vision detection result; and
    generating a vision correction strategy according to the vision change information, so that the user to be detected performs vision correction using the vision correction strategy.
  8. The method according to claim 1, wherein the distance between the user to be detected and the electronic device is determined by an AI camera or a distance sensor.
  9. The method according to claim 1, wherein the size of the selected detection pattern is adjusted according to the distance between the user to be detected and the electronic device, and the selected detection pattern with the adjusted size is the first detection pattern.
  10. The method according to claim 1, wherein the information contained in the first detection pattern is consistent with the information of the selected detection pattern.
  11. The method according to claim 2, wherein the plurality of preset detection modes comprises a vision detection mode and a color detection mode, the vision detection mode corresponding to a vision detection pattern and the color detection mode corresponding to a color detection pattern.
  12. The method according to claim 11, wherein the color detection mode corresponds to a standard color comparison chart, the color comparison chart comprising a plurality of color vision detection patterns, each color vision detection pattern corresponding to a piece of vision identification information, a preset size, a piece of standard description information and a preset distance.
  13. The method according to claim 5, wherein the skeleton information comprises at least one of left-eye information, right-eye information, left-ear information, right-ear information, nose information, neck information, left-shoulder information, right-shoulder information, left-elbow information, right-elbow information, left-wrist information, right-wrist information, left-waist information, right-waist information, left-knee information, right-knee information, left-ankle information and right-ankle information.
  14. An electronic device, comprising: a memory, a processor, and a vision detection program stored in the memory and running on the processor, wherein the steps of the vision detection method according to any one of claims 1 to 13 are implemented when the vision detection program is executed by the processor.
  15. A storage medium on which a vision detection program is stored, wherein the steps of the vision detection method according to any one of claims 1 to 13 are implemented when the vision detection program is executed by a processor.
PCT/CN2021/140348 2021-12-09 2021-12-22 Vision detection method, electronic device, and storage medium WO2023103088A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111502807.1A CN114190880A (zh) 2021-12-09 2021-12-09 Vision detection method, apparatus, electronic device, and storage medium
CN202111502807.1 2021-12-09

Publications (1)

Publication Number Publication Date
WO2023103088A1 true WO2023103088A1 (zh) 2023-06-15

Family

ID=80651828

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/140348 WO2023103088A1 (zh) 2021-12-09 2021-12-22 Vision detection method, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114190880A (zh)
WO (1) WO2023103088A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115054198B (zh) * 2022-06-10 2023-07-21 广州视域光学科技股份有限公司 Remote intelligent vision detection method, system and apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109431446A (zh) * 2018-08-03 2019-03-08 中山大学附属眼科医院验光配镜中心 Online vision examination method, apparatus, terminal device and storage medium
CN109431445A (zh) * 2018-08-03 2019-03-08 广州视光专业技术服务有限公司 Vision monitoring method, apparatus, terminal device and storage medium
CN110353622A (zh) * 2018-10-16 2019-10-22 武汉交通职业学院 Vision detection method and vision detector
WO2021068486A1 (zh) * 2019-10-12 2021-04-15 深圳壹账通智能科技有限公司 Image recognition-based vision detection method, apparatus, and computer device
CN111493810A (zh) * 2020-04-13 2020-08-07 深圳创维-Rgb电子有限公司 Display device-based vision detection method, display device and storage medium

Also Published As

Publication number Publication date
CN114190880A (zh) 2022-03-18

Similar Documents

Publication Publication Date Title
US9602584B2 (en) System with distributed process unit
WO2019196707A1 (zh) 一种移动终端控制方法及移动终端
WO2019228163A1 (zh) 扬声器控制方法及移动终端
CN108491123B (zh) 一种调节应用程序图标的方法及移动终端
CN107809658A (zh) 一种弹幕内容显示方法和终端
CN106303029A (zh) 一种画面的旋转控制方法、装置及移动终端
US20220286503A1 (en) Synchronization method and electronic device
CN106990831A (zh) 一种调节屏幕亮度的方法及终端
CN110007758B (zh) 一种终端的控制方法及终端
CN110827820B (zh) 语音唤醒方法、装置、设备、计算机存储介质及车辆
CN110881212B (zh) 设备省电的方法、装置、电子设备及介质
WO2020211607A1 (zh) 生成视频的方法、装置、电子设备及介质
CN109461124A (zh) 一种图像处理方法及终端设备
CN108668024A (zh) 一种语音处理方法及终端
CN110738971B (zh) 用于墨水屏的页面刷新方法及装置
WO2023103088A1 (zh) Vision detection method, electronic device, and storage medium
CN110070143B (zh) 获取训练数据的方法、装置、设备及存储介质
US20220132250A1 (en) Mobile Terminal and Control Method
CN109451158B (zh) 一种提醒方法和装置
CN108235084B (zh) 一种视频播放方法及移动终端
CN107729100B (zh) 一种界面显示控制方法及移动终端
CN112100528A (zh) 对搜索结果评分模型进行训练的方法、装置、设备、介质
CN110830619A (zh) 一种显示方法及电子设备
CN115774655A (zh) 数据处理方法、装置、电子设备及计算机可读介质
CN111614841A (zh) 闹钟控制方法、装置、存储介质及移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21967000

Country of ref document: EP

Kind code of ref document: A1