CN110338748B - Method for quickly positioning vision value, storage medium, terminal and vision detector - Google Patents

Method for quickly positioning vision value, storage medium, terminal and vision detector

Info

Publication number
CN110338748B
CN110338748B (application CN201910513319.7A)
Authority
CN
China
Prior art keywords
data information
information
matrix data
training
sighting target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910513319.7A
Other languages
Chinese (zh)
Other versions
CN110338748A (en)
Inventor
毛维波
梅建国
郑定列
骆晟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Ming Sing Optical R & D Co ltd
Original Assignee
Ningbo Ming Sing Optical R & D Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Ming Sing Optical R & D Co ltd filed Critical Ningbo Ming Sing Optical R & D Co ltd
Priority to CN201910513319.7A priority Critical patent/CN110338748B/en
Publication of CN110338748A publication Critical patent/CN110338748A/en
Application granted granted Critical
Publication of CN110338748B publication Critical patent/CN110338748B/en
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/028: Subjective types for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B 3/032: Devices for presenting test symbols or characters, e.g. test chart projectors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Surgery (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a method for quickly positioning a vision value, a storage medium, a terminal and a vision detector. It addresses the problems that a test can only be completed by indicating visual targets step by step and that overall detection efficiency is low. The key points of the technical scheme are: acquiring the current trigger information of the current user, which comprises visual target judgment information; comparing the visual target judgment information with the corresponding displayed visual target information to form judgment result information; and calling a pre-trained neural network model and feeding back the current visual target judgment information and the currently displayed visual target information to the model, which analyzes them to form the next visual target information to display.

Description

Method for quickly positioning vision value, storage medium, terminal and vision detector
Technical Field
The invention relates to a vision detector, in particular to a method for quickly positioning a vision value, a storage medium, a terminal and a vision detector.
Background
Most traditional and common vision testing devices are light-box eye charts, vision projectors and comprehensive optometry instruments, and these traditional devices have the following disadvantages. First, the visual targets (optotypes) and their arrangement on the chart are fixed, so ordinary people can memorize them after a short period of observation, which leaves room for cheating. Second, medical staff must point at the visual targets with a pointer while the subject reports the opening direction of each target; the staff then compare the answers and finally arrive at the subject's vision value. This creates a considerable workload for medical staff, errors are inevitable during testing, and some staff may even measure only a few targets to finish the task early, which also affects the accuracy and consistency of the vision test.
With the appearance and development of electronic eye charts, self-service vision testing has gradually become common: the subject can complete the test alone, which greatly reduces labor costs and makes it convenient to test at any time. Generally, after a visual target appears on the chart, the subject indicates its direction by pressing keys on a remote controller; if the targets at the same level are indicated correctly several times, the chart moves down one level, otherwise it moves up one level.
However, this method is inefficient: a subject's vision test requires many indications, which harms the user experience and, especially when many subjects must be tested under strict time requirements, hinders completion of the testing task.
Disclosure of Invention
The first purpose of the present invention is to provide a method for quickly positioning a vision value, which can quickly read and position the vision value of a subject to improve the detection efficiency.
The technical purpose of the invention is realized by the following technical scheme:
a method of rapidly locating a vision value, comprising:
acquiring current trigger information of a current user, wherein the trigger information comprises sighting target judgment information;
comparing and analyzing the visual target judgment information and the corresponding display visual target information to form judgment result information;
and calling a pre-trained neural network model and feeding back current visual target judgment information and current display visual target information at the current moment to the neural network model to analyze and form next display visual target information.
By adopting this scheme, the corresponding parameter factors are input into the trained neural network model, that is, the current visual target judgment information and the currently displayed visual target information are fed back to the model. Based on the model's training and learning, the next visual target to display can be predicted accurately, so that it is relatively close to the current subject's vision value; this greatly reduces the number of tests and improves the detection efficiency.
Preferably, the method for analysis by the neural network model is as follows:
acquiring current sighting target judging information and current display sighting target information; forming the data corresponding to the current sighting target judging information and the current display sighting target information into the incidence factor matrix data information;
performing data processing on the incidence factor matrix data information according to a preset activation function to form primary matrix data information; randomly setting zero to the first-level matrix data information to form second-level matrix data information;
performing data processing on the second-level matrix data information according to a preset activation function to form third-level matrix data information;
and carrying out data processing on the three-level matrix data information according to a preset normalization index function to form next display sighting target information.
By adopting the scheme, through the arrangement of the multilayer neural network model, the neural network model can fully learn data in the training process, and some external influence factors are removed as far as possible so as to improve the accuracy of finally formed current display sighting target information; the current display sighting target information which is closest to the vision value of the current examinee is obtained through prediction, so that the test times are reduced, and the test efficiency is improved.
Preferably, the method for processing the incidence factor matrix data information with the activation function is as follows:
mapping the first weight matrix data information obtained by training and the associated factor matrix data information according to a Sigmoid function to form first primary mapping data information;
mapping the second weight matrix data information obtained by training and the associated factor matrix data information according to a Sigmoid function to form second primary mapping data information;
performing data processing on the second primary mapping data information and the currently displayed sighting target information to weaken the currently displayed sighting target information and form first weakened display sighting target information;
performing data processing on the first weakening display sighting target information and the sighting target judging information to form first weakening matrix data information;
mapping third weight matrix data information obtained by training and first weakening matrix data information according to a tanh function to form first internal output data information;
according to the weight data corresponding to the first primary mapping data information, different weights are sequentially distributed to the current display sighting target information and the first internal output data information to form primary matrix data information;
the method for processing the data of the secondary matrix data information by the activation function comprises the following steps:
mapping the first weight matrix data information obtained by training and the secondary matrix data information according to a Sigmoid function to form third primary mapping data information;
mapping the second weight matrix data information obtained by training and the secondary matrix data information according to a Sigmoid function to form fourth primary mapping data information;
performing data processing on the fourth primary mapping data information and the currently displayed sighting target information to weaken the currently displayed sighting target information and form second weakened display sighting target information;
performing data processing on the second weakening display visual target information and the visual target judgment information to form second weakening matrix data information;
mapping third weight matrix data information obtained by training and second weakening matrix data information according to a tanh function to form second internal output data information;
and according to the weight data corresponding to the third primary mapping data information, different weights are sequentially distributed to the current display sighting target information and the second internal output data information to form three-level matrix data information.
Preferably, in the process of forming the first primary mapping data information and the second primary mapping data information through the Sigmoid function, the first weight matrix data information and the second weight matrix data information are respectively processed through the preset random zero-setting matrix data information and then mapped with the associated factor matrix data information.
Preferably, a primary matrix data information formula is formed as follows:
Z_{t1} = σ(M * W_z * [h_{t1-1}, x_{t1-1}]);
r_{t1} = σ(M * W_r * [h_{t1-1}, x_{t1-1}]);
p'_{t1} = tanh(M * W * [r_{t1} * h_{t1-1}, x_{t1-1}]);
p_{t1} = (1 - Z_{t1}) * h_{t1-1} + Z_{t1} * p'_{t1};
wherein Z_{t1} is the first primary mapping data information;
r_{t1} is the second primary mapping data information;
p'_{t1} is the first internal output data information;
p_{t1} is the primary matrix data information at time t1;
W_z is the first weight matrix data information obtained by training;
W_r is the second weight matrix data information obtained by training;
W is the third weight matrix data information obtained by training;
[h_{t1-1}, x_{t1-1}] is the incidence factor matrix data information;
h_{t1-1} is the displayed visual target information at time t1-1;
x_{t1-1} is the visual target judgment information at time t1-1;
r_{t1} * h_{t1-1} is the first weakened display visual target information;
[r_{t1} * h_{t1-1}, x_{t1-1}] is the first weakening matrix data information;
M is the random zero-setting matrix data information;
the three-level matrix data information formula is formed as follows:
Z_{t2} = σ(M * W_z * [h''_{t1}]);
r_{t2} = σ(M * W_r * [h''_{t1}]);
p'_{t2} = tanh(M * W * [r_{t2} * h_{t1-1}, x_{t1-1}]);
p_{t2} = (1 - Z_{t2}) * h_{t2-1} + Z_{t2} * p'_{t2};
wherein Z_{t2} is the third primary mapping data information;
r_{t2} is the fourth primary mapping data information;
h''_{t1} is the secondary matrix data information at time t1;
p'_{t2} is the second internal output data information;
p_{t2} is the three-level matrix data information at time t2;
W_z is the first weight matrix data information obtained by training;
W_r is the second weight matrix data information obtained by training;
W is the third weight matrix data information obtained by training;
h_{t1-1} is the displayed visual target information at time t1-1;
x_{t1-1} is the visual target judgment information at time t1-1;
r_{t2} * h_{t1-1} is the second weakened display visual target information;
[r_{t2} * h_{t1-1}, x_{t1-1}] is the second weakening matrix data information;
and M is the random zero-setting matrix data information.
With this scheme, the weight ratios of the different data are ultimately formed while the neural network model is trained, and three weight matrices can be formed according to the different requirements of the training process. During data processing, the Sigmoid and tanh functions perform the mappings of the data, and the processed data are output for subsequent processing. The first and second weight matrix data information obtained by training are randomly zeroed by the random zero-setting matrix data information to preserve the diversity of the data, preventing the training data from belonging to a single type and the training results from being too uniform, and improving the prediction accuracy of the neural network model.
Preferably, the learning method for the neural network model is as follows:
generating required training data information according to a binary prediction method of the eye chart;
acquiring training data information and filtering invalid data of the training data information to form filtered data information; the filtering data information comprises current sighting target judging training information and current display sighting target training information;
acquiring current sighting target judging training information and current display sighting target training information; forming the data corresponding to the current sighting target judging training information and the current display sighting target training information into the incidence factor matrix data information;
performing data processing on the incidence factor matrix data information according to a preset activation function to form primary matrix data information; randomly setting zero to the first-level matrix data information to form second-level matrix data information;
performing data processing on the second-level matrix data information according to a preset activation function to form third-level matrix data information;
and carrying out data processing on the three-level matrix data information according to a preset normalization index function to form next display sighting target training information.
With this scheme: the binary prediction method locates the vision value faster, but it is not accurate enough and the data it produces are too uniform. It is therefore used to generate training data, and the neural network is trained on those data, so that in the final prediction stage the model, trained on the principle of the binary prediction method, achieves higher detection accuracy.
Preferably, the method for forming the training data information is as follows:
define the record of the i-th check as (t_i, s_i, p_i), where t_i is the time interval from the visual target being shown to the subject indicating its direction, s_i is the vision value, corresponding to the displayed visual target information, and p_i indicates whether the answer is correct, corresponding to the judgment result information;
if p_i is correct, the predicted vision value of the (i+1)-th check is:
[formula shown only as an image in the original; not reproduced]
if p_i is wrong, the predicted vision value of the (i+1)-th check is:
[formula shown only as an image in the original; not reproduced]
the detection after the preset number of checks is defined as one complete detection;
if p_i is wrong in every one of the preset number of checks, all check records are used as training data information;
if p_i is correct at any point during the preset number of checks, the remaining unperformed checks are filled with random data and combined with the recorded checks as training data information.
By adopting this scheme, the number of checks is fixed and the data from incomplete detections are padded randomly, ensuring that the computer can smoothly read the corresponding data, avoiding failures caused by malformed input while reading data and training the neural network model, and improving stability.
A second object of the present invention is to provide a computer-readable storage medium capable of storing a corresponding program, which can quickly read and locate the vision value of a subject to improve the detection efficiency.
The technical purpose of the invention is realized by the following technical scheme:
a computer-readable storage medium comprising a program which is loadable by a processor and which, when executed, carries out the method of fast localization of vision values as claimed in the preceding claim.
By adopting this scheme, the corresponding parameter factors are input into the trained neural network model, that is, the current visual target judgment information and the currently displayed visual target information are fed back to the model. Based on the model's training and learning, the next visual target to display can be predicted accurately, so that it is relatively close to the current subject's vision value; this greatly reduces the number of tests and improves the detection efficiency.
A third object of the present invention is to provide a terminal, which can quickly read and position the vision value of a subject to improve the detection efficiency.
The technical purpose of the invention is realized by the following technical scheme:
a terminal comprising a memory, a processor and a program stored on the memory and executable on the processor, the program being capable of being loaded and executed by the processor to perform the method of rapidly localizing a vision value as claimed in the preceding claim.
By adopting this scheme, the corresponding parameter factors are input into the trained neural network model, that is, the current visual target judgment information and the currently displayed visual target information are fed back to the model. Based on the model's training and learning, the next visual target to display can be predicted accurately, so that it is relatively close to the current subject's vision value; this greatly reduces the number of tests and improves the detection efficiency.
A fourth object of the present invention is to provide a vision tester, which can quickly read and position the vision value of the person to be tested to improve the testing efficiency.
The technical purpose of the invention is realized by the following technical scheme:
a vision testing apparatus comprising a memory, a processor and a program stored on said memory and executable on said processor, the program being capable of being loaded and executed by the processor to perform the method of rapidly locating a vision value as claimed in any preceding claim.
By adopting this scheme, the corresponding parameter factors are input into the trained neural network model, that is, the current visual target judgment information and the currently displayed visual target information are fed back to the model. Based on the model's training and learning, the next visual target to display can be predicted accurately, so that it is relatively close to the current subject's vision value; this greatly reduces the number of tests and improves the detection efficiency.
In conclusion, the invention has the following beneficial effects: the next visual target to display is chosen by the trained neural network model, which largely avoids stepping through the visual targets one by one for the subject to judge and improves the overall detection efficiency.
Drawings
FIG. 1 is a block flow diagram of a method of rapidly locating a visual value;
FIG. 2 is a block flow diagram of a method of analysis by a neural network model;
FIG. 3 is a block flow diagram of a method for data processing of the relevance factor matrix data information with respect to an activation function;
fig. 4 is a flow chart diagram of a learning method for a neural network model.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
This embodiment is only intended to explain the present invention and does not limit it. After reading this specification, those skilled in the art may modify the embodiment as needed without making an inventive contribution, and all such modifications are protected by patent law within the scope of the claims of the present invention.
The embodiment of the invention provides a method for quickly positioning a vision value, which comprises the following steps: acquiring current trigger information of a current user, wherein the trigger information comprises sighting target judgment information; comparing and analyzing the visual target judgment information and the corresponding display visual target information to form judgment result information; and calling a pre-trained neural network model and feeding back current visual target judgment information and current display visual target information at the current moment to the neural network model to analyze and form next display visual target information.
In the embodiment of the invention, the corresponding parameter factors are input into the trained neural network model, that is, the current visual target judgment information and the currently displayed visual target information are fed back to the model. Based on the model's training and learning, the next visual target to display can be fed back accurately, so that it is relatively close to the current subject's vision value; this greatly reduces the number of tests and improves the detection efficiency.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship, unless otherwise specified.
The embodiments of the present invention will be described in further detail with reference to the drawings attached hereto.
The embodiment of the invention provides a method for quickly positioning a vision value, and the main flow of the method is described as follows.
As shown in fig. 1:
step 1000: and acquiring current trigger information of the current user, wherein the trigger information comprises sighting target judgment information.
The current trigger information can be acquired through mechanical keys or virtual keys. Mechanical-key triggering presses preset physical keys, such as the up/down/left/right keys on a remote controller or keyboard, or selects the corresponding area with a mouse and clicks to acquire the current trigger information; in this embodiment, acquiring the current trigger information with a remote controller is preferred. Virtual-key triggering presses the relevant virtual trigger key in the interface of the corresponding software.
Step 2000: and comparing and analyzing the visual target judgment information and the corresponding display visual target information to form judgment result information.
The judgment is carried out in a preset way: whether the visual target judgment information fed back by the subject matches the currently displayed visual target information. If it does, the subject can see the current visual target clearly; if not, the subject cannot.
Step 3000: and calling a pre-trained neural network model and feeding back current visual target judgment information and current display visual target information at the current moment to the neural network model to analyze and form next display visual target information.
According to the trained neural network model, the corresponding parameter factors are input into it: the current visual target judgment information and the currently displayed visual target information are fed back to the model, and based on its training and learning the next visual target to display can be predicted accurately, so that it is relatively close to the current subject's vision value.
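For illustration only, the loop of steps 1000 to 3000 can be sketched in Python as follows; the callables predict_next, read_direction and show_target are hypothetical stand-ins, not interfaces defined by the patent:

def run_vision_test(predict_next, read_direction, show_target, start_level=5, steps=6):
    """Sketch of steps 1000-3000 as a loop.

    predict_next(level, answer) -> next level to display (the trained model)
    read_direction()            -> direction keyed in by the subject (trigger info)
    show_target(level)          -> direction actually displayed at that level
    """
    level, history = start_level, []
    for _ in range(steps):
        shown = show_target(level)                 # currently displayed target info
        answer = read_direction()                  # visual target judgment info
        history.append((level, answer == shown))   # judgment result info
        level = predict_next(level, answer)        # next displayed target info
    return history

In a real device, show_target would wrap the display driver and read_direction the remote-controller input; here they are left abstract.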
As shown in fig. 2, the method for analysis by the neural network model is as follows:
step 3100: acquiring current sighting target judging information and current display sighting target information; and forming the data corresponding to the current sighting mark judging information and the current display sighting mark information into the incidence factor matrix data information.
Wherein, the current visual target judgment information is the judgment fed back by the current subject after observing the displayed visual target, and the currently displayed visual target information is the visual target shown on the display device. The two factors are correlated to form the incidence factor matrix data information, which can be written as [h_{t1-1}, x_{t1-1}], where h_{t1-1} is the displayed visual target information at time t1-1 and x_{t1-1} is the visual target judgment information at time t1-1.
Step 3200: and performing data processing on the incidence factor matrix data information according to the preset activation function to form primary matrix data information.
As shown in fig. 3, the method for performing data processing on the data information of the relevance factor matrix with respect to the activation function is as follows:
step 3210: and mapping the first weight matrix data information obtained by training and the correlation factor matrix data information according to a Sigmoid function to form first primary mapping data information.
Step 3220: and mapping the second weight matrix data information obtained by training and the correlation factor matrix data information according to a Sigmoid function to form second primary mapping data information.
The Sigmoid function is the common S-shaped logistic function, also called the sigmoid growth curve. In information science, because the function and its inverse are both monotonically increasing, the Sigmoid function is often used as the threshold function of a neural network, mapping variables into the interval (0, 1).
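For reference, the standard logistic sigmoid referred to here is:

σ(x) = 1 / (1 + e^(-x))

which maps any real input into the open interval (0, 1).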
In the process of forming the first primary mapping data information and the second primary mapping data information through the Sigmoid function, the first weight matrix data information and the second weight matrix data information are respectively processed through the preset random zero-setting matrix data information and then mapped with the incidence factor matrix data information.
The relevant processing formula is as follows:
Z_{t1} = σ(M * W_z * [h_{t1-1}, x_{t1-1}]);
r_{t1} = σ(M * W_r * [h_{t1-1}, x_{t1-1}]);
wherein Z_{t1} is the first primary mapping data information;
r_{t1} is the second primary mapping data information;
σ is the Sigmoid function;
M is the random zero-setting matrix data information;
W_z is the first weight matrix data information obtained by training;
W_r is the second weight matrix data information obtained by training;
h_{t1-1} is the displayed visual target information at time t1-1;
x_{t1-1} is the visual target judgment information at time t1-1.
Step 3230: and carrying out data processing on the second primary mapping data information and the current display visual target information to weaken the current display visual target information and form first weakened display visual target information.
Step 3240: and performing data processing on the first weakening display sighting target information and the sighting target judging information to form first weakening matrix data information.
Step 3250: and mapping the third weight matrix data information obtained by training with the first weakening matrix data information according to the tanh function to form first internal output data information.
The currently displayed visual target information is weakened by the second primary mapping data information obtained in the preceding step, and after this stage of data processing is finished, the corresponding first weakening matrix data information is formed; it can be written as [r_{t1} * h_{t1-1}, x_{t1-1}].
The tanh function is one of the hyperbolic functions; tanh() denotes the hyperbolic tangent. In mathematics, the hyperbolic tangent is derived from the two basic hyperbolic functions, the hyperbolic sine and the hyperbolic cosine. The function y = tanh x has domain R and range (-1, 1). It is an odd function, and its graph is a strictly monotonically increasing curve through the origin that lies in quadrants I and III, bounded by the two horizontal asymptotes y = 1 and y = -1.
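For reference, the hyperbolic tangent can be written in terms of exponentials as:

tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)) = sinh(x) / cosh(x)

which is consistent with the domain R and range (-1, 1) stated above.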
The relevant processing formula is as follows:
p'_{t1} = tanh(M * W * [r_{t1} * h_{t1-1}, x_{t1-1}]);
wherein p'_{t1} is the first internal output data information;
p_{t1} is the primary matrix data information at time t1;
M is the random zero-setting matrix data information;
W is the third weight matrix data information obtained by training;
r_{t1} is the second primary mapping data information;
h_{t1-1} is the displayed visual target information at time t1-1;
x_{t1-1} is the visual target judgment information at time t1-1.
Step 3260: and according to the weight data corresponding to the first primary mapping data information, different weights are sequentially distributed to the current display sighting target information and the first internal output data information to form primary matrix data information.
Once the required first-level matrix data information has been formed through the above steps, the third-level matrix data information can be formed through the same method steps.
The method for processing the data of the secondary matrix data information by the activation function comprises the following steps:
mapping the first weight matrix data information obtained by training and the secondary matrix data information according to a Sigmoid function to form third primary mapping data information;
mapping the second weight matrix data information obtained by training and the secondary matrix data information according to a Sigmoid function to form fourth primary mapping data information;
performing data processing on the fourth primary mapping data information and the currently displayed sighting target information to weaken the currently displayed sighting target information and form second weakened display sighting target information;
performing data processing on the second weakening display visual target information and the visual target judgment information to form second weakening matrix data information;
mapping third weight matrix data information obtained by training and second weakening matrix data information according to a tanh function to form second internal output data information;
and according to the weight data corresponding to the third primary mapping data information, different weights are sequentially distributed to the current display sighting target information and the second internal output data information to form three-level matrix data information.
The three-level matrix data information formula is formed as follows:
Z_{t2} = σ(M * W_z * [h''_{t1}]);
r_{t2} = σ(M * W_r * [h''_{t1}]);
p'_{t2} = tanh(M * W * [r_{t2} * h_{t1-1}, x_{t1-1}]);
p_{t2} = (1 - Z_{t2}) * h_{t2-1} + Z_{t2} * p'_{t2};
wherein Z_{t2} is the third primary mapping data information;
r_{t2} is the fourth primary mapping data information;
h''_{t1} is the secondary matrix data information at time t1;
p'_{t2} is the second internal output data information;
p_{t2} is the three-level matrix data information at time t2;
W_z is the first weight matrix data information obtained by training;
W_r is the second weight matrix data information obtained by training;
W is the third weight matrix data information obtained by training;
h_{t1-1} is the displayed visual target information at time t1-1;
x_{t1-1} is the visual target judgment information at time t1-1;
r_{t2} * h_{t1-1} is the second weakened display visual target information;
[r_{t2} * h_{t1-1}, x_{t1-1}] is the second weakening matrix data information;
and M is the random zero-setting matrix data information.
The relevant processing formula is as follows:
h_t = (1 - Z_t) * h_{t-1} + Z_t * h'_t;
wherein h_t is the displayed visual target information at time t;
Z_t is the first primary mapping data information;
h_{t-1} is the displayed visual target information at time t-1;
h'_t is the internal output data information.
During training, the neural network model ultimately forms the weight ratios of the different data, and three weight matrices can be formed according to the different requirements of the training process. During data processing, the Sigmoid and tanh functions perform the mappings of the data, and the processed data are output for subsequent processing. The first and second weight matrix data information obtained by training are randomly zeroed by the random zero-setting matrix data information to preserve the diversity of the data, preventing the training data from belonging to a single type and the training results from being too uniform, and improving the prediction accuracy of the neural network model.
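The gating structure just described matches a GRU-style recurrent cell with an extra random zero-setting mask applied to the weights. The following NumPy sketch reflects that reading; the vector sizes, the Bernoulli mask and its rate are illustrative assumptions, since the patent states only that the weight matrices are randomly zeroed:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_like_cell(h_prev, x_prev, Wz, Wr, W, zero_rate=0.1, rng=None):
    """One step of the gated cell described above.

    h_prev -- previously displayed visual target information (vector, length H)
    x_prev -- previous visual target judgment information (vector, length X)
    Wz, Wr, W -- first/second/third trained weight matrices, each of shape (H, H + X)
    The random zero-setting matrix M is drawn as a Bernoulli mask (an assumption).
    """
    rng = np.random.default_rng() if rng is None else rng
    hx = np.concatenate([h_prev, x_prev])            # incidence factor matrix [h, x]
    M = rng.random(Wz.shape) > zero_rate             # random zero-setting matrix

    z = sigmoid((M * Wz) @ hx)                       # first primary mapping (update gate)
    r = sigmoid((M * Wr) @ hx)                       # second primary mapping (reset gate)

    weakened = np.concatenate([r * h_prev, x_prev])  # first weakening matrix data
    p_inner = np.tanh((M * W) @ weakened)            # first internal output

    return (1.0 - z) * h_prev + z * p_inner          # primary matrix data information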
Step 3300: and randomly setting zero to the primary matrix data information to form secondary matrix data information.
Step 3400: and performing data processing on the secondary matrix data information according to the preset activation function to form tertiary matrix data information.
The three-level matrix data information can be obtained through the same steps as the first-level matrix data information, and can also be obtained through other activation functions.
Step 3500: and carrying out data processing on the three-level matrix data information according to a preset normalization index function to form next display sighting target information.
The normalization index function adopted is the softmax function (the normalized exponential function), a generalization of the logistic function. It can "compress" a K-dimensional vector of arbitrary real numbers into another K-dimensional real vector in which each element lies in the range (0, 1) and all elements sum to 1.
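For reference, the standard softmax of a K-dimensional vector z is:

softmax(z)_i = e^(z_i) / (e^(z_1) + ... + e^(z_K)),  i = 1, ..., K

so each output lies in (0, 1), the outputs sum to 1, and the largest component can be read as the most likely next visual target level.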
Through the arrangement of the multilayer neural network model, the neural network model can fully learn data in the training process, and some external influence factors are removed as much as possible so as to improve the accuracy of the finally formed current display sighting target information; the current display sighting target information which is closest to the vision value of the current examinee is obtained through prediction, so that the test times are reduced, and the test efficiency is improved.
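Chaining the cell through the described levels gives the following forward-pass sketch, reusing sigmoid and gru_like_cell from the previous sketch. Wout is an assumed output projection feeding the softmax, and the second stage is simplified: the patent feeds the second-stage gates from the secondary matrix data alone, whereas this sketch reuses the same cell signature for brevity:

def forward_pass(h_prev, x_prev, Wz, Wr, W, Wout, rng=None):
    """First level -> random zeroing -> third level -> softmax (steps 3200-3500)."""
    rng = np.random.default_rng() if rng is None else rng
    p1 = gru_like_cell(h_prev, x_prev, Wz, Wr, W, rng=rng)   # primary matrix data

    keep = rng.random(p1.shape) > 0.1                        # random zero-setting
    h2 = p1 * keep                                           # secondary matrix data

    p3 = gru_like_cell(h2, x_prev, Wz, Wr, W, rng=rng)       # three-level matrix data

    logits = Wout @ p3                                       # one score per target level
    e = np.exp(logits - logits.max())                        # numerically stable softmax
    return e / e.sum()                                       # next display probabilities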
As shown in fig. 4, the learning method for the neural network model is as follows:
step 4100: and generating required training data information according to a binary prediction method of the eye chart.
The method for forming the training data information comprises the following steps:
define the record of the i-th check as (t_i, s_i, p_i), where t_i is the time interval from the visual target being shown to the subject indicating its direction, s_i is the vision value, corresponding to the displayed visual target information, and p_i indicates whether the answer is correct, corresponding to the judgment result information;
if p_i is correct, the predicted vision value of the (i+1)-th check is:
[formula shown only as an image in the original; not reproduced]
if p_i is wrong, the predicted vision value of the (i+1)-th check is:
[formula shown only as an image in the original; not reproduced]
the detection after the preset number of checks is defined as one complete detection;
if p_i is wrong in every one of the preset number of checks, all check records are used as training data information;
if p_i is correct at any point during the preset number of checks, the remaining unperformed checks are filled with random data and combined with the recorded checks as training data information.
Fixing the number of checks and randomly padding the data from incomplete detections ensures that the computer can smoothly read the corresponding data, avoids failures caused by malformed input while reading data and training the neural network model, and improves stability.
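A sketch of the training-data generation described above, under explicit assumptions: the bisection formulas appear only as images in the original, so the midpoint rule in bisect_next is an assumed stand-in, and the chart scale (rows indexed 0 to N_LEVELS-1, higher meaning smaller optotypes) is illustrative:

import random

N_LEVELS = 12       # number of rows on the chart (illustrative)
PRESET_CHECKS = 8   # preset number of checks per complete detection (illustrative)

def bisect_next(lo, hi, s, correct):
    # Assumed bisection step: narrow the candidate interval toward harder
    # rows after a correct answer, toward easier rows after a wrong one,
    # then take the midpoint as the next displayed level.
    if correct:
        lo = s
    else:
        hi = s
    return lo, hi, (lo + hi) // 2

def generate_detection(true_level):
    """Simulate one complete detection and return its records (t_i, s_i, p_i)."""
    lo, hi = 0, N_LEVELS - 1
    s = (lo + hi) // 2
    records = []
    for _ in range(PRESET_CHECKS):
        t = random.uniform(0.5, 3.0)       # reaction-time interval t_i (simulated)
        p = s <= true_level                # correct iff the row is still legible
        records.append((t, s, p))
        lo, hi, s = bisect_next(lo, hi, s, p)
        if hi - lo <= 1:                   # interval collapsed; detection ends early
            break
    while len(records) < PRESET_CHECKS:    # randomly pad incomplete detections
        records.append((random.uniform(0.5, 3.0),
                        random.randrange(N_LEVELS),
                        random.random() < 0.5))
    return records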
Step 4200: acquiring training data information and filtering invalid data of the training data information to form filtered data information; the filtering data information comprises current sighting target judging training information and current display sighting target training information;
step 4300: acquiring current sighting target judging training information and current display sighting target training information; forming the data corresponding to the current sighting target judging training information and the current display sighting target training information into the incidence factor matrix data information;
step 4400: performing data processing on the incidence factor matrix data information according to a preset activation function to form primary matrix data information;
step 4500: randomly setting zero to the first-level matrix data information to form second-level matrix data information;
step 4600: performing data processing on the second-level matrix data information according to a preset activation function to form third-level matrix data information;
step 4700: and carrying out data processing on the three-level matrix data information according to a preset normalization index function to form next display sighting target training information.
The binary prediction method locates the vision value faster, but it is not accurate enough and the data it produces are too uniform; it is therefore used to generate training data, and the neural network is trained on those data, so that in the final prediction stage the model, trained on the principle of the binary prediction method, achieves higher detection accuracy.
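To connect the generated records to the training steps 4200 to 4700, one plausible pairing is sketched below, reusing N_LEVELS from the previous sketch; the validity test is an assumed stand-in for the invalid-data filtering:

def records_to_pairs(records):
    """Turn one detection's records into (input, target) training pairs."""
    pairs = []
    for (t, s, p), (_, s_next, _) in zip(records, records[1:]):
        if t <= 0 or not 0 <= s < N_LEVELS:      # filter invalid data (assumed test)
            continue
        pairs.append(((s, int(p)), s_next))      # (display, judgment) -> next display
    return pairs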
Embodiments of the present invention provide a computer-readable storage medium including instructions that, when loaded and executed by a processor, implement the method steps described in the flows of Figures 1 to 4.
The computer-readable storage medium includes, for example: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Based on the same inventive concept, embodiments of the present invention provide a terminal comprising a memory, a processor and a program stored in the memory and executable on the processor, the program being capable of being loaded and executed by the processor to implement the method for rapidly positioning a vision value described in the flows of Figures 1 to 4.
Based on the same inventive concept, embodiments of the present invention provide a vision tester comprising a memory, a processor and a program stored in the memory and executable on the processor, the program being capable of being loaded and executed by the processor to implement the method for rapidly positioning a vision value described in the flows of Figures 1 to 4.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: u disk, removable hard disk, read only memory, random access memory, magnetic or optical disk, etc. for storing program codes.
The above embodiments are intended only to describe the technical solutions of the present application in detail and to help readers understand the method and core idea of the present invention; they should not be construed as limiting it. Those skilled in the art will also appreciate that various changes and substitutions can easily be conceived within the technical scope of the present disclosure.

Claims (6)

1. A method for rapidly positioning a vision value is characterized by comprising the following steps:
acquiring current trigger information of a current user, wherein the trigger information comprises sighting target judgment information;
comparing and analyzing the visual target judgment information and the corresponding display visual target information to form judgment result information;
calling a pre-trained neural network model and feeding back current visual target judgment information and current display visual target information at the current moment to the neural network model to analyze and form next display visual target information;
the method for analysis by the neural network model is as follows:
acquiring current sighting target judging information and current display sighting target information; forming the data corresponding to the current sighting target judging information and the current display sighting target information into the incidence factor matrix data information;
performing data processing on the incidence factor matrix data information according to a preset activation function to form primary matrix data information;
randomly setting zero to the first-level matrix data information to form second-level matrix data information;
performing data processing on the second-level matrix data information according to a preset activation function to form third-level matrix data information;
performing data processing on the three-level matrix data information according to a preset normalization index function to form next display sighting target information;
the method for processing the data information of the incidence factor matrix by the activation function comprises the following steps:
mapping the first weight matrix data information obtained by training and the associated factor matrix data information according to a Sigmoid function to form first primary mapping data information;
mapping the second weight matrix data information obtained by training and the associated factor matrix data information according to a Sigmoid function to form second primary mapping data information;
performing data processing on the second primary mapping data information and the currently displayed sighting target information to weaken the currently displayed sighting target information and form first weakened display sighting target information;
performing data processing on the first weakening display sighting target information and the sighting target judging information to form first weakening matrix data information;
mapping third weight matrix data information obtained by training and first weakening matrix data information according to a tanh function to form first internal output data information;
according to the weight data corresponding to the first primary mapping data information, different weights are sequentially distributed to the current display sighting target information and the first internal output data information to form primary matrix data information;
the method for processing the data of the secondary matrix data information by the activation function comprises the following steps:
mapping the first weight matrix data information obtained by training and the secondary matrix data information according to a Sigmoid function to form third primary mapping data information;
mapping the second weight matrix data information obtained by training and the secondary matrix data information according to a Sigmoid function to form fourth primary mapping data information;
performing data processing on the fourth primary mapping data information and the currently displayed sighting target information to weaken the currently displayed sighting target information and form second weakened display sighting target information;
performing data processing on the second weakening display visual target information and the visual target judgment information to form second weakening matrix data information;
mapping third weight matrix data information obtained by training and second weakening matrix data information according to a tanh function to form second internal output data information;
and according to the weight data corresponding to the third primary mapping data information, different weights are sequentially distributed to the current display sighting target information and the second internal output data information to form three-level matrix data information.
2. The method for rapidly positioning visual force values as claimed in claim 1, wherein in the process of forming the first primary mapping data information and the second primary mapping data information by the Sigmoid function, the first weight matrix data information and the second weight matrix data information are mapped with the association factor matrix data information after being respectively processed by the preset random zero-setting matrix data information.
3. The method for rapidly locating a visual force value according to claim 2, wherein a primary matrix data information formula is formed as follows:
Z_{t1} = σ(M * W_z * [h_{t1-1}, x_{t1-1}]);
r_{t1} = σ(M * W_r * [h_{t1-1}, x_{t1-1}]);
p'_{t1} = tanh(M * W * [r_{t1} * h_{t1-1}, x_{t1-1}]);
p_{t1} = (1 - Z_{t1}) * h_{t1-1} + Z_{t1} * p'_{t1};
wherein Z_{t1} is the first primary mapping data information;
r_{t1} is the second primary mapping data information;
p'_{t1} is the first internal output data information;
p_{t1} is the primary matrix data information at time t1;
W_z is the first weight matrix data information obtained by training;
W_r is the second weight matrix data information obtained by training;
W is the third weight matrix data information obtained by training;
[h_{t1-1}, x_{t1-1}] is the incidence factor matrix data information;
h_{t1-1} is the displayed visual target information at time t1-1;
x_{t1-1} is the visual target judgment information at time t1-1;
r_{t1} * h_{t1-1} is the first weakened display visual target information;
[r_{t1} * h_{t1-1}, x_{t1-1}] is the first weakening matrix data information;
M is the random zero-setting matrix data information;
the three-level matrix data information formula is formed as follows:
Z_{t2} = σ(M * W_z * [h''_{t1}]);
r_{t2} = σ(M * W_r * [h''_{t1}]);
p'_{t2} = tanh(M * W * [r_{t2} * h_{t1-1}, x_{t1-1}]);
p_{t2} = (1 - Z_{t2}) * h_{t1-1} + Z_{t2} * p'_{t2};
wherein Z_{t2} is the third primary mapping data information;
r_{t2} is the fourth primary mapping data information;
h''_{t1} is the secondary matrix data information at time t1;
p'_{t2} is the second internal output data information;
p_{t2} is the three-level matrix data information at time t2;
W_z is the first weight matrix data information obtained by training;
W_r is the second weight matrix data information obtained by training;
W is the third weight matrix data information obtained by training;
h_{t1-1} is the displayed visual target information at time t1-1;
x_{t1-1} is the visual target judgment information at time t1-1;
r_{t2} * h_{t1-1} is the second weakened display visual target information;
[r_{t2} * h_{t1-1}, x_{t1-1}] is the second weakening matrix data information;
and M is the random zero-setting matrix data information.
4. The method for rapidly locating a vision value as in claim 1,
wherein the learning method for the neural network model is as follows:
generating required training data information according to a binary prediction method of the eye chart;
acquiring training data information and filtering invalid data of the training data information to form filtered data information; the filtering data information comprises current sighting target judging training information and current display sighting target training information;
acquiring current sighting target judging training information and current display sighting target training information; forming the data corresponding to the current sighting target judging training information and the current display sighting target training information into the incidence factor matrix data information;
performing data processing on the incidence factor matrix data information according to a preset activation function to form primary matrix data information;
randomly setting zero to the first-level matrix data information to form second-level matrix data information;
performing data processing on the second-level matrix data information according to a preset activation function to form third-level matrix data information;
performing data processing on the three-level matrix data information according to a preset normalization index function to form next display sighting target training information;
the method for forming the training data information is as follows:
define the record of the ith examination as (t)i,si,pi),tiTime interval, s, for indicating direction for visual target to subjectiFor vision values and corresponding to displayed optotype information, piTo indicateWhether the judgment result is correct or not and corresponds to the judgment result information;
if p isiIf the test result is correct, the predicted vision value of the (i + 1) th check is as follows:
Figure 308178DEST_PATH_IMAGE001
=
Figure 653709DEST_PATH_IMAGE002
if p_i is wrong, the predicted vision value of the (i+1)-th check is:
[a second prediction formula shown only as an image in the source]
a detection of the preset number of checks is defined as one complete detection;
if p_i is wrong on every check within the preset number, all detection records are used as training data information;
if p_i is correct on any check within the preset number, the remaining undetected data are randomly complemented and combined with the recorded checks to serve as training data information (an illustrative sketch follows).
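The two prediction formulas in this claim survive only as images in the source, so the exact update rule is not recoverable; the sketch below assumes a simple bisection-style rule on the 5-point vision scale (a wrong answer lowers the next value toward a larger optotype, a correct answer ends the run) purely to illustrate the record format (t_i, s_i, p_i) and the random complementing of undetected data.

```python
import random

def generate_training_records(n_checks=8, s_min=4.0, s_max=5.3, rng=None):
    """Build one complete detection of n_checks records (t_i, s_i, p_i).

    Assumed behaviour (the source formulas are images): vision values move
    by bisection between s_min and s_max; a correct judgment ends the real
    detection, and the remaining undetected slots are complemented with
    random records, as the claim describes.
    """
    rng = rng or random.Random()
    lo, hi = s_min, s_max
    records = []
    for _ in range(n_checks):
        s = round((lo + hi) / 2.0, 2)        # assumed bisection prediction
        t = round(rng.uniform(0.5, 3.0), 2)  # reaction-time interval (illustrative)
        p = rng.random() < 0.5               # simulated judgment, True = correct
        records.append((t, s, p))
        if p:
            break                            # assumed: a correct judgment ends detection
        hi = s                               # wrong -> larger optotype (lower value) next
    while len(records) < n_checks:           # randomly complement undetected data
        records.append((round(rng.uniform(0.5, 3.0), 2),
                        round(rng.uniform(s_min, s_max), 2),
                        rng.random() < 0.5))
    return records
```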
5. A computer-readable storage medium comprising a program which can be loaded by a processor and which, when executed, carries out the method for quickly positioning a vision value according to any one of claims 1 to 4.
6. A terminal comprising a memory, a processor, and a program stored in the memory and executable on the processor, the program being capable of being loaded and executed by the processor to perform the method for quickly positioning a vision value according to any one of claims 1 to 4.
CN201910513319.7A 2019-06-13 2019-06-13 Method for quickly positioning vision value, storage medium, terminal and vision detector Active CN110338748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910513319.7A CN110338748B (en) 2019-06-13 2019-06-13 Method for quickly positioning vision value, storage medium, terminal and vision detector


Publications (2)

Publication Number Publication Date
CN110338748A CN110338748A (en) 2019-10-18
CN110338748B (en) 2022-03-08

Family

ID=68181976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910513319.7A Active CN110338748B (en) 2019-06-13 2019-06-13 Method for quickly positioning vision value, storage medium, terminal and vision detector

Country Status (1)

Country Link
CN (1) CN110338748B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111265182A (en) * 2020-01-21 2020-06-12 李小丹 AI remote optometry service platform and optometry equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002039754A1 (en) * 2000-11-08 2002-05-16 Andrzej Czyzewski Visual screening tests by means of computers
CN102599879A (en) * 2012-02-23 2012-07-25 天津理工大学 Self-adaptive eyesight test intelligent system and eyesight test method
US9277857B1 (en) * 2014-11-06 2016-03-08 Bertec Corporation System for testing and/or training the vision of a subject
CN106060142A (en) * 2016-06-17 2016-10-26 杨斌 Mobile phone capable of checking eyesight, and method for checking eyesight by using mobile phone
CN106537290A (en) * 2014-05-09 2017-03-22 谷歌公司 Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects
CN106778597A (en) * 2016-12-12 2017-05-31 朱明 Intellectual vision measurer based on graphical analysis
CN107198505A (en) * 2017-04-07 2017-09-26 天津市天中依脉科技开发有限公司 Visual function detecting system and method based on smart mobile phone
CN107358036A (en) * 2017-06-30 2017-11-17 北京机器之声科技有限公司 A kind of child myopia Risk Forecast Method, apparatus and system
CN107411700A (en) * 2017-04-07 2017-12-01 天津大学 A kind of hand-held vision inspection system and method
CN109285602A (en) * 2017-07-19 2019-01-29 索尼公司 Main module, system and method for self-examination eyes of user

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6471403B2 (en) * 2013-01-31 2019-02-20 株式会社ニデック Optometry equipment
JP6626057B2 (en) * 2017-09-27 2019-12-25 ファナック株式会社 Inspection device and inspection system
JP7008815B2 (en) * 2017-10-31 2022-01-25 ウェルチ・アリン・インコーポレーテッド Vision test


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"The dynamics of practice effects in an optotype acuity task";Sven P. Heinrich et al.;《BASIC SCIENCE》;20110421;全文 *


Similar Documents

Publication Publication Date Title
Bolt A Monte Carlo comparison of parametric and nonparametric polytomous DIF detection methods
TWI543744B (en) Adaptive visual performance testing system
Finch Multidimensional item response theory parameter estimation with nonsimple structure items
Ferrando et al. Detecting dissimulation in personality test scores: A comparison between person-fit indices and detection scales
Raju et al. The item parameter replication method for detecting differential functioning in the polytomous DFIT framework
Preston et al. Detecting faulty within-item category functioning with the nominal response model
CN110338748B (en) Method for quickly positioning vision value, storage medium, terminal and vision detector
Berger Attainment of skill in using science processes. I. Instrumentation, methodology and analysis
CN108447047A (en) Acid-fast bacilli detection method and device
CN109102888A (en) A kind of human health methods of marking
CN116350203B (en) Physical testing data processing method and system
CN116975558A (en) Calculation thinking evaluation method based on multi-dimensional project reaction theory
CN110096708A (en) A kind of determining method and device of calibration collection
CN115511454A (en) Method and device for generating audit rules and related products
CN114489760A (en) Code quality evaluation method and code quality evaluation device
CN111370095A (en) IC card based traditional Chinese medicine constitution distinguishing and conditioning test system and test method
Hill Two models for longitudinal item response data
Rogel et al. Global and partial agreement among several observers
CN113990452B (en) Evaluation method and system based on psychological literacy and readable storage medium
CN106096219B (en) A kind of Data Quality Analysis method for the evaluation of fruit and vegetable recognition algorithm performance
Snow et al. A comparison of unidimensional and three-dimensional differential item functioning analysis using two-dimensional data
Jennings et al. Evaluation of model-data fit by comparing parametric and nonparametric item response functions: Application of a Tukey-Hann Procedure
Rönkkö et al. Use of partial least squares as a theory testing tool–an analysis of information systems papers
CN113436712B (en) Evaluation management system for intelligent medical cloud service platform
Hurtz et al. Expanding the Lognormal Response Time Model Using Profile Similarity Metrics to Improve the Detection of Anomalous Testing Behavior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant