CN115054198B - Remote intelligent vision detection method, system and device - Google Patents

Remote intelligent vision detection method, system and device

Info

Publication number
CN115054198B
Authority
CN
China
Prior art keywords
detection
user
instruction
user side
range
Prior art date
Legal status
Active
Application number
CN202210657207.0A
Other languages
Chinese (zh)
Other versions
CN115054198A (en)
Inventor
伍卫东
项道满
孟晶
刘小勇
Current Assignee
Guangzhou Vision Optical Technology Co ltd
Original Assignee
Guangzhou Vision Optical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Vision Optical Technology Co ltd
Priority to CN202210657207.0A
Publication of CN115054198A
Application granted
Publication of CN115054198B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/028 Subjective types, i.e. testing apparatus requiring the active assistance of the patient, for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B 3/032 Devices for presenting test symbols or characters, e.g. test chart projectors
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices, for the operation of medical equipment or devices
    • G16H 40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices, for the operation of medical equipment or devices, for remote operation

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a remote intelligent vision detection method, a system and a device, wherein the method comprises the following steps: s1: the user side obtains the login authority of the vision detection service platform and inputs an operation instruction; s2: retrieving corresponding historical detection data at the vision detection service platform based on the user type of the current user; s3: generating a corresponding detection plan based on the historical detection data and the operation instruction; s4: executing the detection plan, and simultaneously receiving a direction feedback instruction input by the user side; s5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained; s6: updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list; the personalized detection plan is generated through network communication, and storage, updating and checking of detection results are realized on the cloud service platform, so that network intellectualization of a detection process is realized, and detection efficiency and accuracy are improved.

Description

Remote intelligent vision detection method, system and device
Technical Field
The invention relates to the technical field of intelligent detection, in particular to a remote intelligent vision detection method, system and device.
Background
At present, traditional vision testing requires a professional to consult the user's testing history and then estimate an approximate testing range from professional experience, after which the professional carries out a manual pointing test within that estimated range. Because the testing history may be incomplete, or distorted by the user's memory lapses or other factors, a large error can be introduced and a more accurate personalized testing plan cannot be generated; the whole vision testing process is therefore inefficient and time-consuming, and the manual pointing test also limits the accuracy of the result. In addition, traditional vision testing services provide no one-to-one data storage for users, which hinders the storage, updating and viewing of users' testing data.
Accordingly, the present invention is directed to a method, system, and apparatus for remote intelligent vision testing.
Disclosure of Invention
The invention provides a remote intelligent vision detection method, a system and a device, which are used for generating an individualized detection plan through network communication, realizing storage, updating and checking of detection results on a cloud service platform, realizing the intellectualization of a detection process and improving the detection efficiency and accuracy.
The invention provides a remote intelligent vision detection method, which comprises the following steps:
S1: the user side obtains the login authority of the vision detection service platform and inputs an operation instruction;
s2: retrieving corresponding historical detection data at the vision detection service platform based on the user type of the current user;
s3: generating a corresponding detection plan based on the historical detection data and the operation instruction;
s4: executing the detection plan, and simultaneously receiving a direction feedback instruction input by the user side;
s5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
s6: and updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list.
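For illustration only, the following is a minimal Python sketch of how steps S1 to S6 could be orchestrated end to end; the platform and client objects and all of their method names (login, retrieve_history, generate_plan, and so on) are hypothetical and do not reflect any particular implementation of the claimed method.
```python
# Hypothetical end-to-end sketch of steps S1-S6; names are illustrative only.
def run_remote_vision_test(platform, client):
    # S1: the user side obtains login permission and inputs an operation instruction
    session = platform.login(client.credentials())
    operation = client.input_operation_instruction()       # detection type + detection range

    # S2: retrieve historical detection data for the current user
    history = platform.retrieve_history(session.user_id, operation.detection_type)

    # S3: generate a personalized detection plan
    plan = platform.generate_plan(history, operation)

    # S4 + S5: execute the plan, receiving direction feedback and adjusting in real time
    for prompt in plan.prompts():
        feedback = client.direction_feedback(prompt)
        plan.adjust(prompt, feedback)
    initial_result = plan.result()

    # S6: update the result, then issue a payment instruction and a recommendation list
    platform.update_result(session.user_id, initial_result)
    client.receive(platform.payment_instruction(plan))
    client.receive(platform.recommendation_list(session.user_id))
    return initial_result
```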
Preferably, the remote intelligent vision testing method, S1: the user side obtains the login authority of the vision testing service platform and inputs an operation instruction, and the method comprises the following steps:
receiving a first user type feedback instruction and a second user type feedback instruction input by the user terminal, wherein the first user type comprises: medical care end, user end, parent end, second user type includes: history users, new users;
determining a corresponding login recommended mode based on the first user type feedback instruction, and displaying the login recommended mode on a login mode selection page;
Receiving a login mode selection feedback instruction input by the user side;
determining a login mode corresponding to the user terminal based on the login mode selection feedback instruction;
determining a corresponding verification mode based on the second user type feedback instruction;
generating a corresponding login page based on the verification mode and the login mode, and jumping the display page of the user side to the login page;
receiving primary login information input by the user side from the login page, verifying whether the primary login information is correct, and if yes, sending an authority acquisition success instruction to the user side;
otherwise, sending a permission acquisition failure instruction to the user terminal, and simultaneously, sending a user type secondary confirmation instruction to the user terminal;
determining a secondary login mode and a secondary verification mode based on the secondary confirmation instruction;
generating a secondary login page based on the secondary login mode and the secondary verification mode, and jumping the display page of the user side to the secondary login page;
receiving secondary login information input by the user side from the secondary login page, verifying whether the secondary login information is correct, and if yes, sending a right acquisition success instruction to the user side;
Otherwise, sending a failure instruction of permission secondary acquisition to the user terminal, and simultaneously, jumping a display page of the user terminal to a new user registration page;
receiving new user registration information input by the user terminal from the new user registration page, storing the new user registration information into a user information verification library, and sending a right acquisition success instruction to the user terminal;
and when the user side authority is successfully acquired, acquiring an operation instruction input by the user side.
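A minimal sketch of this permission-acquisition flow is given below, assuming Python and hypothetical platform and client interfaces (recommended_modes, verify, grant_permission, and so on); it only illustrates the primary attempt, the secondary attempt after re-confirming the user type, and the fall-back to new-user registration.
```python
# Minimal sketch of the permission-acquisition flow described above (hypothetical names).
def acquire_login_permission(platform, client):
    first_type = client.first_user_type()        # e.g. "medical", "user", "parent"
    second_type = client.second_user_type()      # "history_user" or "new_user"

    login_mode = client.choose_login_mode(platform.recommended_modes(first_type))
    verify_mode = platform.verification_mode(second_type)

    # Primary login attempt
    info = client.fill_login_page(login_mode, verify_mode)
    if platform.verify(info):
        return platform.grant_permission(info)

    # Secondary attempt after re-confirming the user type
    second_login_mode, second_verify_mode = platform.secondary_modes(
        client.confirm_user_type_again())
    info = client.fill_login_page(second_login_mode, second_verify_mode)
    if platform.verify(info):
        return platform.grant_permission(info)

    # Fall back to new-user registration
    registration = client.fill_registration_page()
    platform.store_new_user(registration)
    return platform.grant_permission(registration)
```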
Preferably, the remote intelligent vision detection method obtains an operation instruction input by the user terminal, including:
when the user side authority is successfully obtained or the new user is successfully registered, sending a detection type selection instruction and a detection range selection instruction to the user side;
and receiving a detection type selection feedback instruction and a detection range selection feedback instruction which are input by the user terminal.
Preferably, the remote intelligent vision testing method, S2: searching corresponding historical detection data in the vision detection service platform based on the user type of the current user to obtain a search result, wherein the search result comprises the following steps:
s201: when the user side authority is successfully obtained and the second user type of the user side is a history user, obtaining login information corresponding to the user side;
S202: determining final login information in the login information;
s203: determining a corresponding search term chain in the final login information, and searching corresponding historical detection information in the vision detection service platform based on the search term chain;
s204: screening historical detection data of a corresponding detection type from the historical detection information based on the detection type feedback instruction;
wherein, the login information includes: primary login information and secondary login information.
Preferably, the remote intelligent vision testing method, S3: generating a corresponding detection plan based on the historical detection data and the operation instruction, including:
analyzing the history detection data to obtain corresponding detection time intervals and detection data fluctuation between adjacent history detection data;
based on each detection time interval and the corresponding detection data fluctuation, obtaining a fluctuation relation coefficient corresponding to each detection time interval;
obtaining a corresponding fluctuation amplitude relation coefficient fitting curve based on the fluctuation amplitude relation coefficient;
determining the latest time interval from the latest detection time to the current time in the historical detection data;
predicting a corresponding latest fluctuation range relation coefficient range based on the fluctuation range relation coefficient fitting curve and the latest time interval;
Determining the latest detection data fluctuation range based on the latest fluctuation relation coefficient range and the latest time interval;
determining a first detection range based on the latest detection data fluctuation range and the latest detection data in the historical detection data;
analyzing the detection range selection feedback instruction to obtain a corresponding user selection detection range;
judging whether the first detection range comprises the user selection detection range, if so, taking the first detection range as an initial detection range;
otherwise, taking the combined set of the first detection range and the user-selected detection range as an initial detection range;
and generating a corresponding detection plan based on the initial detection range and the corresponding detection category.
Preferably, the remote intelligent vision testing method generates a corresponding testing plan based on the initial testing range and the corresponding testing category, and includes:
determining a corresponding photometric range and a measurement distance based on the corresponding detection category;
generating a corresponding detection word sequence table based on the initial detection range;
generating a corresponding first detection plan based on the luminosity range and the measured distance and the detection word sequence list;
Generating a corresponding first expense confirmation list based on the first detection plan and sending the first expense confirmation list to the user side;
receiving a confirmation feedback instruction input by the user side;
if the confirmation feedback instruction is confirmation detection, the first detection plan is used as a corresponding detection plan;
and if the confirmation feedback instruction is the adjustment detection range or the adjustment detection type, adjusting the first detection plan based on the confirmation feedback instruction, generating a corresponding second detection plan, sending a second expense confirmation list corresponding to the second detection plan to the user side, and taking the second detection plan as the corresponding detection plan until the feedback confirmation instruction input by the user side is received.
Preferably, the remote intelligent vision testing method, S4: executing the detection plan, and simultaneously receiving a direction feedback instruction input by the user terminal, wherein the method comprises the following steps:
s401: generating a corresponding voice prompt list based on the detection word sequence list and the feedback receiving time;
s402: and broadcasting prompt voice based on the voice prompt list, and receiving a direction feedback instruction input by the user side.
Preferably, the remote intelligent vision testing method, S5: and adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained, wherein the method comprises the following steps of:
analyzing the direction feedback instruction in real time to obtain a direction feedback result;
judging whether the direction feedback result is consistent with the corresponding detection word direction, if so, continuing broadcasting the prompt voice based on the voice prompt list until an initial detection result is obtained;
otherwise, re-broadcasting the latest broadcasted prompt voice and receiving a secondary direction feedback instruction;
judging whether a secondary direction feedback result corresponding to the secondary direction feedback instruction is consistent with the corresponding detection word direction, if so, continuing broadcasting prompt voice based on the voice prompt list until an initial detection result is obtained;
otherwise, determining an initial detection result based on the detection word corresponding to the latest broadcasted prompt voice.
Preferably, the remote intelligent vision testing method, S6: updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list, wherein the method comprises the following steps:
storing the initial detection result and the current detection time into historical detection data corresponding to the corresponding detection type in the vision detection service platform;
Based on the fee confirmation list finally sent to the user side, generating a corresponding fee paying instruction and sending the fee paying instruction to the user side;
all the history detection data of the user side are called, the history detection data are stored in a structuring mode according to detection types, and an omnibearing detection list corresponding to the user side is generated;
analyzing the historical detection data to determine the risk coefficient of the corresponding detection type;
and screening out corresponding recommended items from an item library based on the detection types and the corresponding risk coefficients, generating a corresponding intelligent recommended list based on all the recommended items, and sending the intelligent recommended list to the user side.
Preferably, a remote intelligent vision testing system comprises:
the input module is used for the user side to acquire the login permission of the vision detection service platform and input an operation instruction;
the retrieval module is used for retrieving corresponding historical detection data from the vision detection service platform based on the user type of the current user;
the generation module is used for generating a corresponding detection plan based on the historical detection data and the operation instruction;
the execution module executes the detection plan and receives a direction feedback instruction input by the user terminal;
The adjusting module is used for adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
and the output module is used for updating the initial detection result to the vision detection service platform and generating a corresponding payment instruction and an intelligent recommendation list.
Preferably, a remote intelligent vision testing apparatus is configured to implement the remote intelligent vision testing method, and the remote intelligent vision testing apparatus includes:
the display screen is arranged on the front surface of the user side shell, and four sides of the display screen are wrapped by the user side shell;
the surface of the user side shell is also provided with a position detection device, a control button and a voice interaction device, and the position detection device, the control button and the voice interaction device are all positioned below the display screen;
the position detection device is used for detecting position information between the user side shell and a user;
the user side shell is provided with a position adjusting device, the position adjusting device is arranged on the rear surface of the user side shell and used for adjusting the position between the user side shell and a user according to the position information between the user side shell and the user detected by the detecting device, and the position adjusting device comprises:
The device comprises a fixed plate, a movable plate, a motor, an output shaft, a first transmission shaft, a first belt pulley, a first gear, a clutch, a first lead screw, a movable block, a fixed sleeve, a support plate, a first connecting rod, a second connecting rod, a chute, a second lead screw, a fixed block, a second transmission shaft, a second belt pulley, a synchronous belt, a second gear and a third transmission shaft;
the motor is fixed on the moving plate, the output end of the motor is fixed with one end of the output shaft, the other end of the output shaft is fixed with a first belt wheel, and the first belt wheel is connected with a second belt wheel through a synchronous belt;
one end of the first transmission shaft is fixed with the first belt wheel, a first gear is fixed at the other end of the first transmission shaft, the first gear is meshed with the second gear, the second gear is fixed at one end of the third transmission shaft, and the other end of the third transmission shaft is connected with one end of the first screw rod through a clutch;
the movable block is sleeved on the first screw rod and is in threaded connection with the first screw rod, the other end of the first screw rod is connected with the supporting plate, the supporting plate is fixed on the movable plate, the fixed sleeve is rotatably sleeved on the other end of the first screw rod, and the fixed sleeve is fixed with the supporting plate;
One end of the first connecting rod is rotationally connected with the moving block, the other end of the first connecting rod is slidingly connected with the fixed plate, one end of the second connecting rod is rotationally connected with the fixed sleeve, and the other end of the second connecting rod is rotationally connected with the fixed plate;
the user side shell is fixed with the fixed plate;
one end of the second transmission shaft is fixed on the second belt wheel, the other end of the second transmission shaft is connected with a second lead screw through a clutch, and the other end of the second lead screw is connected with the supporting plate;
the movable plate is provided with a sliding groove, the fixed block penetrates through the sliding groove, the fixed block is sleeved on the second screw rod, and the fixed block is in threaded connection with the second screw rod.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of a remote intelligent vision testing method in an embodiment of the present invention;
FIG. 2 is another flow chart of a remote intelligent vision testing method in accordance with an embodiment of the present invention;
FIG. 3 is a further flow chart of a remote intelligent vision testing method in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of a remote intelligent vision testing system in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of a remote intelligent vision testing apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic front view of an adjusting device of a remote intelligent vision testing device according to an embodiment of the present invention;
FIG. 7 is a schematic side view of an adjusting device of a remote intelligent vision testing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic side view of an adjusting device of a remote intelligent vision testing apparatus according to another embodiment of the present invention.
In the figure: 1. user side housing; 2. display screen; 3. position detection device; 4. control key; 5. voice interaction device; 6. fixing plate; 7. moving plate; 8. motor; 9. output shaft; 10. first transmission shaft; 11. first pulley; 12. first gear; 13. clutch; 14. first lead screw; 15. moving block; 16. fixed sleeve; 17. support plate; 18. first link; 19. second link; 20. chute; 21. second lead screw; 22. fixed block; 23. second transmission shaft; 24. second pulley; 25. synchronous belt; 26. second gear; 27. third transmission shaft.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Example 1:
the invention provides a remote intelligent vision detection method, which comprises the following steps of:
s1: the user side obtains the login authority of the vision detection service platform and inputs an operation instruction;
s2: retrieving corresponding historical detection data at the vision detection service platform based on the user type of the current user;
s3: generating a corresponding detection plan based on the historical detection data and the operation instruction;
s4: executing the detection plan, and simultaneously receiving a direction feedback instruction input by the user side;
s5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
s6: and updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list.
In this embodiment, the vision detection service platform is a platform for providing detection service for the user terminal, and is also a platform for storing user history detection data.
In this embodiment, the login authority is the authority of the user side to successfully login to the vision inspection service platform.
In this embodiment, the operation instructions include: the detection type selection feedback instruction and the detection range selection feedback instruction are used for determining the current user vision detection type and detection range.
In this embodiment, the user types include: historical users, new users.
In this embodiment, the history detection data is the current user's vision detection history data.
In this embodiment, the detection plan is a personalized detection plan generated based on the historical detection data and the operation instruction input by the user side, and includes a detection order of the detection words and an indication voice corresponding to each detection word included in the detection plan.
In this embodiment, the direction feedback instruction is a detected word direction fed back by the user terminal and received after each broadcast instruction voice input by the user terminal.
In this embodiment, the initial detection result is a vision detection result generated based on the direction feedback instruction input by the user obtained after the execution of the detection plan.
In this embodiment, the payment instruction is an instruction for reminding the user to pay the fee, which includes the fee confirmation list corresponding to the detection item.
In this embodiment, the intelligent recommendation list is a list of recommended items retrieved from the item library based on analysis of the initial detection result and the historical detection data.
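The terms defined above can be read as a small data model. The following Python dataclass sketch is purely illustrative; all class and field names are assumptions rather than structures taken from the patent.
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OperationInstruction:
    detection_type: str            # from the detection type selection feedback instruction
    detection_range: tuple         # from the detection range selection feedback instruction

@dataclass
class DetectionPlan:
    word_order: List[str]          # detection order of the detection words
    prompt_voices: List[str]       # indication voice for each detection word

@dataclass
class DetectionRecord:
    detection_time: str
    detection_type: str
    result: float                  # vision detection result

@dataclass
class UserProfile:
    user_type: str                                           # "history_user" or "new_user"
    history: List[DetectionRecord] = field(default_factory=list)
```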
The beneficial effects of the technology are as follows: according to the invention, the user logs in to the vision detection service platform over the network and inputs an operation instruction; a personalized detection plan is generated based on the input operation instruction and the user's historical detection data stored on the vision detection service platform, and the user is prompted by voice to input direction feedback instructions. The verbal exchange of the traditional vision detection process is thereby replaced by network communication, and the detection plan can be adjusted in real time according to the direction feedback instructions fed back by the user, which greatly improves detection efficiency, shortens the detection process and also ensures the accuracy of the vision detection process; the vision detection results and historical detection data are stored, updated and viewed on the network cloud service platform, realizing the intellectualization of the detection process.
Example 2:
based on the embodiment 1, the remote intelligent vision testing method, S1: the user side obtains the login authority of the vision testing service platform and inputs an operation instruction, and the method comprises the following steps:
Receiving a first user type feedback instruction and a second user type feedback instruction input by the user terminal, wherein the first user type comprises: medical care end, user end, parent end, second user type includes: history users, new users; determining a corresponding login recommended mode based on the first user feedback instruction, and displaying the login recommended mode to a login mode selection page; receiving a login mode selection feedback instruction input by the user side; determining a login mode corresponding to the user terminal based on the login mode selection feedback instruction; determining a corresponding verification mode based on the second user type feedback instruction; generating a corresponding login page based on the verification mode and the login mode, and jumping the display page of the user side to the login page; receiving primary login information input by the user side from the login page, verifying whether the primary login information is correct, and if yes, sending an authority acquisition success instruction to the user side; otherwise, sending a permission acquisition failure instruction to the user terminal, and simultaneously, sending a user type secondary confirmation instruction to the user terminal; determining a secondary login mode and a secondary verification mode based on the secondary confirmation instruction; generating a secondary login page based on the secondary login mode and the secondary verification mode, and jumping the display page of the user side to the secondary login page; receiving secondary login information input by the user side from the secondary login page, verifying whether the secondary login information is correct, and if yes, sending a right acquisition success instruction to the user side; otherwise, sending a failure instruction of permission secondary acquisition to the user terminal, and simultaneously, jumping a display page of the user terminal to a new user registration page; receiving new user registration information input by the user terminal from the new user registration page, storing the new user registration information into a user information verification library, and sending a right acquisition success instruction to the user terminal; and when the user side authority is successfully acquired, acquiring an operation instruction input by the user side.
In this embodiment, the first user type feedback instruction is an instruction for feeding back the first user type of the current user.
In this embodiment, the second user type feedback instruction is an instruction for feeding back the second user type of the current user.
In this embodiment, the login recommendation mode is a login mode suitable for the current user, which is determined based on the first user type of the current user, for example, there are: account password login, face recognition login and the like.
In this embodiment, the login mode selection page is a network page used for displaying login mode options in the vision inspection service platform, and is also a network page for receiving a login mode selection feedback instruction input by a user.
In this embodiment, the login mode selection feedback instruction is an instruction indicating a login mode selected by the user.
In this embodiment, the login mode is a mode that the user logs in to the vision inspection service platform.
In this embodiment, the authentication mode is an authentication (security authentication) mode when the user logs in the vision inspection service platform.
In this embodiment, the login page is a network page generated based on a login mode and a verification mode for the user to log in to the vision detection service platform.
In this embodiment, the first login information is login information that is first input by the user terminal on the login page.
In this embodiment, verifying whether the primary login information is correct includes: determining a login primary key word in the primary login information, searching whether user information corresponding to the login primary key word exists in a user information base, if so, judging whether other information except the login primary key word in the primary login information is consistent with the searched user information, and if so, judging that the primary login information is correct; otherwise, the primary login information is judged to be incorrect.
In this embodiment, the right acquisition success instruction is an instruction for prompting that the current user has successfully logged in to the vision inspection service platform.
In this embodiment, the permission acquisition failure instruction is an instruction for prompting the current user to unsuccessfully log in to enter the vision detection service platform.
In this embodiment, the user type secondary confirmation instruction is an instruction for performing secondary confirmation on the first user type and the second user type of the user.
In this embodiment, the second login mode is a mode of logging in the vision detection service platform by the user terminal determined for the second time.
In this embodiment, the second verification mode is an authentication (security verification) mode when the user side determined for the second time logs in the vision inspection service platform.
In this embodiment, the secondary login page is a network page generated based on the secondary login mode and the secondary verification mode for the user to log in to the vision detection service platform.
In this embodiment, the second login information is login information input by the user terminal on the login page for the second time.
In this embodiment, verifying whether the second login information is correct includes: determining a login primary key word in the secondary login information, searching whether user information corresponding to the login primary key word exists in a user information base, if so, judging whether other information except the login primary key word in the secondary login information is consistent with the searched user information, and if so, judging that the secondary login information is correct; otherwise, the secondary login information is judged to be incorrect.
In this embodiment, the new user registration page is a web page for registering a new user entering the vision inspection service platform.
In this embodiment, the new user registration information is the new user information input by the user end received on the new user registration page.
In this embodiment, the user information verification library is a database in the vision inspection service platform for storing user information.
The beneficial effects of the technology are as follows: the user side enters the vision detection service platform by acquiring the login permission of the vision detection service platform, so that the information security of the user is ensured, the user type of the user is also confirmed, and a basis is provided for subsequent retrieval of historical detection data and generation of a personalized detection plan.
Example 3:
based on embodiment 2, the remote intelligent vision testing method obtains the operation instruction input by the user terminal, including:
when the user side authority is successfully obtained or the new user is successfully registered, sending a detection type selection instruction and a detection range selection instruction to the user side;
and receiving a detection type selection feedback instruction and a detection range selection feedback instruction which are input by the user terminal.
In this embodiment, the detection category selection instruction is used to prompt the current user to select a detection category.
In this embodiment, the detection range selection instruction is used to prompt the current user to select the detection range.
In this embodiment, the detection category selection feedback instruction is an instruction indicating the detection category selected by the current user.
In this embodiment, the detection range selection feedback instruction is an instruction for indicating the detection range selected by the current user.
The beneficial effects of the technology are as follows: by receiving the detection type selection feedback instruction and the detection range selection feedback instruction input by the user terminal, the type and the corresponding detection range which the user wants to detect can be known, so that the generated personalized detection plan considers the requirements of the user, and the generated personalized detection plan is more humanized and comprehensive.
Example 4:
based on embodiment 3, the remote intelligent vision testing method, S2: based on the user type of the current user, searching corresponding historical detection data in the vision detection service platform to obtain a searching result, referring to fig. 2, the method comprises the following steps:
s201: when the user side authority is successfully obtained and the second user type of the user side is a history user, obtaining login information corresponding to the user side;
s202: determining final login information in the login information;
s203: determining a corresponding search term chain in the final login information, and searching corresponding historical detection information in the vision detection service platform based on the search term chain;
S204: screening historical detection data of a corresponding detection type from the historical detection information based on the detection type feedback instruction;
wherein, the login information includes: primary login information and secondary login information.
In this embodiment, the final login information is the login information input by the user last time.
In this embodiment, the term chain is a keyword in the final login information, which is used for searching from the vision testing service platform, for example: name, identification number, etc.
In this embodiment, the history detection information is relevant information corresponding to the vision test that the current user has done, for example: vision testing history time, vision testing history type, vision testing history result.
In this embodiment, the history detection data is the vision detection result of the corresponding detection type that the current user has done.
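A minimal Python sketch of steps S201 to S204 is given below; the record schema (name, id_number and detection_type fields) is an assumed illustration of the search-term chain lookup, not a prescribed data format.
```python
# Sketch of S201-S204: retrieve and filter history by a search-term chain (hypothetical schema).
def retrieve_history(platform_db, login_records, detection_type):
    final_login = login_records[-1]                                # S202: final login information
    term_chain = [final_login["name"], final_login["id_number"]]   # S203: search-term chain

    history_info = [
        rec for rec in platform_db
        if rec["name"] == term_chain[0] and rec["id_number"] == term_chain[1]
    ]
    # S204: keep only records of the requested detection type
    return [rec for rec in history_info if rec["detection_type"] == detection_type]
```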
The beneficial effects of the technology are as follows: the historical detection data of the corresponding detection type is retrieved from the vision detection service platform based on the login information of the user, so that the defect of a traditional mode of acquiring the historical detection data of the user by consulting or inquiring paper materials is overcome, and a foundation is provided for the subsequent generation of the personalized detection plan corresponding to the current user.
Example 5:
based on the embodiment 3, the remote intelligent vision testing method, S3: generating a corresponding detection plan based on the historical detection data and the operation instruction, including: analyzing the history detection data to obtain corresponding detection time intervals and detection data fluctuation between adjacent history detection data; based on each detection time interval and the corresponding detection data fluctuation, obtaining a fluctuation relation coefficient corresponding to each detection time interval; obtaining a corresponding fluctuation amplitude relation coefficient fitting curve based on the fluctuation amplitude relation coefficient; determining the latest time interval from the latest detection time to the current time in the historical detection data; predicting a corresponding latest fluctuation range relation coefficient range based on the fluctuation range relation coefficient fitting curve and the latest time interval; determining the latest detection data fluctuation range based on the latest fluctuation relation coefficient range and the latest time interval; determining a first detection range based on the latest detection data fluctuation range and the latest detection data in the historical detection data; analyzing the detection range selection feedback instruction to obtain a corresponding user selection detection range; judging whether the first detection range comprises the user selection detection range, if so, taking the first detection range as an initial detection range; otherwise, taking the combined set of the first detection range and the user-selected detection range as an initial detection range; and generating a corresponding detection plan based on the initial detection range and the corresponding detection category.
In this embodiment, the detection time interval is the interval between two detection times.
In this embodiment, the detected data amplitude is the amplitude between the corresponding detected data and the corresponding last detected data.
In this embodiment, the fluctuation range relation coefficient is the ratio of the corresponding detected data fluctuation range to the corresponding detected time interval.
In this embodiment, the fluctuation amplitude relation coefficient fitting curve is a curve formed by fitting the fluctuation amplitude relation coefficients obtained from each detection of the corresponding detection category.
In this embodiment, the latest time interval is the time interval between the latest detection time in the historical detection data and the current time.
In this embodiment, predicting the corresponding latest amplitude relation coefficient range based on the amplitude relation coefficient fitting curve and the latest time interval includes:
acquiring the slope of the fluctuation amplitude relation coefficient fitting curve, and determining the latest standard fluctuation amplitude relation coefficient based on the slope, the latest time interval and the latest data on the fitting curve, wherein the latest fluctuation amplitude relation coefficient range is [y - a, y + a], a is the fluctuation range, and y is the latest data on the fitting curve;
the fluctuation range a is calculated from y and y_m, where a is the fluctuation range, y is the latest data on the fluctuation amplitude relation coefficient fitting curve, and y_m is the latest standard fluctuation amplitude relation coefficient;
for example, when y_m is 0.9 and y is 0.7, a is 0.1.
In this embodiment, the first detection range is determined based on the latest detection data fluctuation range and the latest detection data in the historical detection data; that is, the latest detection data plus the latest detection data fluctuation range gives the first detection range.
In this embodiment, the user-selected detection range is a detection range of a corresponding detection category selected by the user, for example: myopia is detected at 200 to 400 degrees.
In this embodiment, the initial detection range is a detection range of a corresponding detection type generated based on the history detection data and the operation instruction.
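The range-prediction logic of S3 can be sketched as follows in Python. The patent's exact formula for the fluctuation range a is not reproduced in this text, so the half-width is left as a parameter (the worked example with y_m = 0.9, y = 0.7 and a = 0.1 is consistent with a = |y_m - y| / 2, but that is an assumption), and a simple least-squares line stands in for the fitting curve.
```python
# Sketch of the S3 range prediction (assumed interpretation; names hypothetical).
def predict_first_detection_range(times, values, now, half_width=0.1):
    if len(values) < 2:
        raise ValueError("need at least two historical detections")

    # Fluctuation relation coefficient for each adjacent pair: |value change| / time interval
    intervals = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    fluctuations = [abs(v2 - v1) for v1, v2 in zip(values, values[1:])]
    coeffs = [f / dt for f, dt in zip(fluctuations, intervals)]

    # Simple least-squares line standing in for the "fitting curve" (coefficient vs. interval)
    n = len(intervals)
    mean_x, mean_y = sum(intervals) / n, sum(coeffs) / n
    denom = sum((x - mean_x) ** 2 for x in intervals) or 1.0
    slope = sum((x - mean_x) * (c - mean_y) for x, c in zip(intervals, coeffs)) / denom
    intercept = mean_y - slope * mean_x

    latest_interval = now - times[-1]
    y_latest = intercept + slope * latest_interval                 # latest data on the fitted curve
    coeff_range = (y_latest - half_width, y_latest + half_width)   # [y - a, y + a]

    # Latest fluctuation range (upper coefficient bound times the interval; an assumption),
    # then the first detection range around the latest detection value.
    spread = coeff_range[1] * latest_interval
    return (values[-1] - spread, values[-1] + spread)

def initial_detection_range(first_range, user_range):
    # Keep the predicted range if it already covers the user's selection; otherwise merge.
    if first_range[0] <= user_range[0] and user_range[1] <= first_range[1]:
        return first_range
    return (min(first_range[0], user_range[0]), max(first_range[1], user_range[1]))
```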
The beneficial effects of the technology are as follows: by analyzing the historical detection data, the relation coefficient between the fluctuation amplitude of the detection data of the corresponding detection category and the time interval is obtained, and a first detection range can be accurately predicted from this coefficient; the predicted first detection range can cover the user's current likely vision detection result, which greatly narrows the detection range. The first detection range is then considered together with the user-selected detection range determined by the operation instruction to generate the corresponding detection plan, so that the detection range stays concise while the customer's requirements are fully taken into account.
Example 6:
on the basis of embodiment 5, the remote intelligent vision testing method generates a corresponding testing plan based on the initial testing range and the corresponding testing category, and includes:
determining a corresponding photometric range and a measurement distance based on the corresponding detection category; generating a corresponding detection word sequence table based on the initial detection range; generating a corresponding first detection plan based on the luminosity range and the measured distance and the detection word sequence list; generating a corresponding first expense confirmation list based on the first detection plan and sending the first expense confirmation list to the user side;
receiving a confirmation feedback instruction input by the user side; if the confirmation feedback instruction is confirmation detection, the first detection plan is used as a corresponding detection plan; and if the confirmation feedback instruction is the adjustment detection range or the adjustment detection type, adjusting the first detection plan based on the confirmation feedback instruction, generating a corresponding second detection plan, sending a second expense confirmation list corresponding to the second detection plan to the user side, and taking the second detection plan as the corresponding detection plan until the feedback confirmation instruction input by the user side is received.
In this embodiment, the corresponding photometric range and measurement distance are determined based on the corresponding detection category: the photometric range is determined from a preset relationship (set according to the detection standard) between detection category and photometric range, and the measurement distance used in the corresponding detection is determined from a preset relationship (set according to the detection standard) between detection category and measurement distance.
in this embodiment, the detection word sequence table is a list of detection word detection sequences generated based on the detection words included in the initial detection range and a preset detection word sequence (generally from top to bottom and from left to right).
In this embodiment, the first detection plan is a detection plan generated based on the photometric range and the measurement distance and the detection word order list.
In this embodiment, the first fee confirmation list is a list including fees for executing the first detection plan.
In this embodiment, the confirmation feedback instruction is an instruction for indicating that the user has confirmed the first fee confirmation list, and includes: confirming detection, adjusting detection range and adjusting detection type.
In this embodiment, the second detection plan is a detection plan generated by adjusting the first detection plan based on the confirmation feedback instruction.
In this embodiment, the second fee confirmation list is a list including fees for executing the second detection plan.
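A minimal Python sketch of the plan-and-fee confirmation loop in this embodiment is shown below; make_word_order, fee_for and the client interface are hypothetical stand-ins for the detection word sequence table, the fee confirmation list and the user side.
```python
# Sketch of the plan generation and fee-confirmation loop in Example 6 (all names hypothetical).
def make_word_order(initial_range, step=25):
    low, high = initial_range
    return [f"E-{deg}" for deg in range(int(low), int(high) + 1, step)]  # detection word sequence

def fee_for(plan, price_per_word=1.0):
    return {"items": plan["word_order"], "total": price_per_word * len(plan["word_order"])}

def build_confirmed_plan(initial_range, category, client, standards):
    photometric_range, distance = standards[category]         # preset by the detection standard
    plan = {"category": category, "photometric_range": photometric_range,
            "distance": distance, "word_order": make_word_order(initial_range)}
    fee_list = fee_for(plan)                                   # first fee confirmation list
    feedback = client.confirm(fee_list)                        # "confirm" or an adjustment
    while feedback != "confirm":
        plan = client.apply_adjustment(plan, feedback)         # regenerate a second plan
        fee_list = fee_for(plan)
        feedback = client.confirm(fee_list)
    return plan, fee_list
```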
The beneficial effects of the technology are as follows: the first detection plan is generated based on the initial detection range, and then the first expense confirmation list is generated, so that the total expense of the detection item can be confirmed to the user side before the detection is executed, the expense transparency and the detection item execution process transparency are realized, the vision detection experience of the user is improved, and the vision detection process is more humanized.
Example 7:
based on embodiment 6, the remote intelligent vision testing method, S4: executing the detection plan, and simultaneously, receiving a direction feedback instruction input by the user terminal, referring to fig. 3, including:
s401: generating a corresponding voice prompt list based on the detection word sequence list and the feedback receiving time;
s402: and broadcasting prompt voice based on the voice prompt list, and receiving a direction feedback instruction input by the user side.
In this embodiment, the feedback receiving time is the time corresponding to each detection word for receiving the feedback instruction selected by the user.
In this embodiment, the voice prompt list is an execution list including prompt voices generated based on the sequence list of the detection words and the interval time corresponding to each detection word prompt voice.
In this embodiment, the prompt voice is a voice for prompting the user to input a selection feedback instruction.
In this embodiment, the direction feedback instruction is an instruction for detecting the direction of the word, which is input by the user and seen by the user.
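Steps S401 and S402 can be sketched as follows, assuming Python and hypothetical speaker and client interfaces; the prompt wording and the 5-second feedback receiving time are illustrative values only.
```python
# Sketch of S401-S402: build a voice prompt list from the detection word order and the
# feedback receiving time, then broadcast and collect direction feedback (hypothetical names).
def build_voice_prompt_list(word_order, feedback_seconds=5.0):
    return [{"word": w, "prompt": f"Please indicate the direction of symbol {i + 1}",
             "wait": feedback_seconds} for i, w in enumerate(word_order)]

def run_prompts(prompt_list, speaker, client):
    feedback = []
    for item in prompt_list:
        speaker.say(item["prompt"])                            # broadcast the prompt voice
        feedback.append(client.direction_feedback(timeout=item["wait"]))
    return feedback
```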
The beneficial effects of the technology are as follows: and generating a voice prompt list based on the detection plan, indicating a user to execute a detection process based on the voice prompt list, receiving a direction feedback instruction input by the user terminal, and converting the vision detection process into a network communication mode, so that automation of the vision detection process is realized, dependence on professionals is reduced, and the efficiency and accuracy of vision detection are improved.
Example 8:
based on embodiment 7, the remote intelligent vision testing method, S5: and adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained, wherein the method comprises the following steps of:
analyzing the direction feedback instruction in real time to obtain a direction feedback result;
judging whether the direction feedback result is consistent with the corresponding detection word direction, if so, continuing broadcasting the prompt voice based on the voice prompt list until an initial detection result is obtained;
otherwise, re-broadcasting the latest broadcasted prompt voice and receiving a secondary direction feedback instruction;
Judging whether a secondary direction feedback result corresponding to the secondary direction feedback instruction is consistent with the corresponding detection word direction, if so, continuing broadcasting prompt voice based on the voice prompt list until an initial detection result is obtained;
otherwise, determining an initial detection result based on the detection word corresponding to the latest broadcasted prompt voice.
In this embodiment, the direction feedback result is the direction of the detected word seen by the user.
In this embodiment, the second direction feedback instruction is an instruction indicating the direction of the detected word seen by the user and received second input by the user when the direction feedback result corresponding to the direction feedback instruction input by the user for the first time is inconsistent with the direction of the corresponding detected word.
In this embodiment, the secondary direction feedback result is the direction of the detected word that the received user second input indicates the user sees when the direction feedback result corresponding to the direction feedback instruction input by the user first time is inconsistent with the direction of the corresponding detected word.
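The real-time adjustment rule of S5 (one re-broadcast after a wrong answer, then terminate at the current detection word) can be sketched as follows; the prompt_list and directions structures and the speaker and client objects are assumed for illustration.
```python
# Sketch of the real-time adjustment in S5: a wrong feedback triggers one re-broadcast, and a
# second wrong answer ends the test at the current detection word (names hypothetical).
def run_adaptive_test(prompt_list, directions, speaker, client):
    last_correct = None
    for item, true_direction in zip(prompt_list, directions):
        speaker.say(item["prompt"])
        if client.direction_feedback() == true_direction:
            last_correct = item["word"]
            continue
        speaker.say(item["prompt"])                      # re-broadcast the same prompt once
        if client.direction_feedback() == true_direction:
            last_correct = item["word"]
            continue
        return item["word"]                              # initial result from the current word
    return last_correct                                  # all prompts answered correctly
```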
The beneficial effects of the technology are as follows: the detection plan is adjusted in real time based on the direction feedback instruction until an initial detection result is obtained; because the plan can be adjusted in real time according to the direction feedback instructions input by the user, the vision detection process is flexibly adapted, avoiding the excessive redundancy, long time consumption and low efficiency that arise when the detection process is overly rigid and inflexible.
Example 9:
based on the embodiment 1, the remote intelligent vision testing method, S6: updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list, wherein the method comprises the following steps:
storing the initial detection result and the current detection time into historical detection data corresponding to the corresponding detection type in the vision detection service platform;
based on the fee confirmation list finally sent to the user side, generating a corresponding fee paying instruction and sending the fee paying instruction to the user side;
all the history detection data of the user side are called, the history detection data are stored in a structuring mode according to detection types, and an omnibearing detection list corresponding to the user side is generated;
analyzing the historical detection data to determine the risk coefficient of the corresponding detection type;
and screening out corresponding recommended items from an item library based on the detection types and the corresponding risk coefficients, generating a corresponding intelligent recommended list based on all the recommended items, and sending the intelligent recommended list to the user side.
In this embodiment, the omnibearing detection list is a list including all detection data corresponding to all detection types generated by performing structural adjustment based on all historical detection data of the current user.
In this embodiment, analyzing the historical detection data determines a risk factor for the corresponding detection category, including:
In the formula, the risk coefficient of the current detection category is computed from the historical detection data, where j indexes the j-th historical detection datum of the current detection category, m is the total number of historical detection data of the current detection category, x_m is the m-th historical detection datum, x_j is the j-th historical detection datum, x_(m-1) is the (m-1)-th historical detection datum, x_(j+1) is the (j+1)-th historical detection datum, x_01 is the first risk level coefficient, x_02 is the second risk level coefficient, and x_03 is the third risk level coefficient; when (x_m - x_j) is 0, the corresponding term is taken as 0, and when (x_(j+1) - x_j) is 0, the corresponding term is taken as 0;
the first risk level coefficient is the coefficient corresponding to the set first risk level, the second risk level coefficient is the coefficient corresponding to the set second risk level, and the third risk level coefficient is the coefficient corresponding to the set third risk level.
For example, if the historical detection data of the current detection category are, in order, 20, 50 and 30, x_01 is 20, x_02 is 30 and x_03 is 40, the resulting risk coefficient is 12.42.
In this embodiment, selecting a corresponding recommended item from the item library based on the detected category and the corresponding risk coefficient includes:
And determining recommended items (such as blue-light-filtering lenses, astigmatism lenses, and the like) corresponding to each detection category of the current user based on the risk coefficient corresponding to each detection category and a recommended item list stored in the item library (including all applicable items corresponding to the risk coefficient of each detection category).
In this embodiment, the intelligent recommendation list is a list including all recommended items corresponding to the current user.
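As a hedged sketch of the screening step only (the item library layout, the risk-band thresholds and every identifier below are assumptions; the patent does not disclose concrete values), the selection of recommended items can be viewed as a lookup keyed by detection category and a band derived from the risk coefficient:

```python
# Illustrative sketch only: the item-library layout, risk-band thresholds and all
# identifiers are assumptions, not the patent's actual data structures.
from typing import Dict, List, Tuple

# Hypothetical item library: (detection category, risk band) -> applicable recommended items.
ITEM_LIBRARY: Dict[Tuple[str, str], List[str]] = {
    ("naked-eye visual acuity", "low"):    ["regular re-examination reminder"],
    ("naked-eye visual acuity", "medium"): ["blue light mirror"],
    ("naked-eye visual acuity", "high"):   ["blue light mirror", "light scattering mirror"],
}


def risk_band(risk_coefficient: float) -> str:
    """Map a numeric risk coefficient to a coarse band (thresholds are assumed)."""
    if risk_coefficient < 5:
        return "low"
    if risk_coefficient < 15:
        return "medium"
    return "high"


def build_intelligent_recommendation_list(risk_by_category: Dict[str, float]) -> List[str]:
    """Collect the recommended items for every detection category of the current user."""
    recommended: List[str] = []
    for category, coefficient in risk_by_category.items():
        items = ITEM_LIBRARY.get((category, risk_band(coefficient)), [])
        recommended.extend(item for item in items if item not in recommended)
    return recommended


# The risk coefficient 12.42 from the worked example above falls in the assumed "medium" band.
print(build_intelligent_recommendation_list({"naked-eye visual acuity": 12.42}))
```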
The beneficial effects of the technology are as follows: updating the initial detection result to the vision detection service platform, generating a corresponding payment instruction and an intelligent recommendation list, realizing automatic storage of the latest vision detection data of the user, generating the corresponding payment instruction and the intelligent recommendation list, and realizing the intellectualization, flow and humanization of the vision detection process.
Example 10:
a remote intelligent vision testing system, referring to fig. 4, comprising:
the input module is used for the user side to acquire the login permission of the vision detection service platform and input an operation instruction;
the retrieval module is used for retrieving corresponding historical detection data from the vision detection service platform based on the user type of the current user;
the generation module is used for generating a corresponding detection plan based on the historical detection data and the operation instruction;
the execution module is used for executing the detection plan and receiving a direction feedback instruction input by the user side;
the adjusting module is used for adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
and the output module is used for updating the initial detection result to the vision detection service platform and generating a corresponding payment instruction and an intelligent recommendation list.
The beneficial effects of the technology are as follows: according to the invention, through setting the input module, the retrieval module, the generation module, the execution module, the adjustment module and the output module, the user can input the operation instruction through the network login vision detection service platform, and generates the personalized detection plan based on the input operation instruction and the user's historical detection data stored by the vision detection service platform, the user is prompted to input the direction feedback instruction through the voice prompt, the language in the traditional vision detection process is converted into the network communication mode, the detection plan can be adjusted in real time according to the direction feedback instruction fed back by the user, the detection efficiency is greatly improved, the time consumption in the detection process is reduced, the accuracy of the vision detection process is also ensured, the vision detection result and the historical detection data are stored, updated and checked in the network cloud service platform, and the intellectualization of the detection process is realized.
Example 11:
a remote intelligent vision testing apparatus for implementing a remote intelligent vision testing method as set forth in any one of embodiments 1-9, with reference to fig. 5-8, comprising:
the display screen 2 is arranged on the front surface of the user side shell 1, and four sides of the display screen 2 are wrapped by the user side shell 1;
the internal structure of the user side housing 1 may refer to an existing housing (for example, that of an existing vision inspection light box);
the surface of the user side shell 1 is also provided with a position detection device 3, a control key 4 and a voice interaction device 5, wherein the position detection device 3, the control key 4 and the voice interaction device 5 are all positioned below the display screen 2;
the position detection device 3 is configured to detect position information between the user side housing 1 and the user (specifically, the position information includes the relative distance between the user side housing 1 and the user, and the difference between the height of the user side housing 1 and the user's line of sight);
a position adjusting device is provided on the user side housing 1; the position adjusting device is arranged on the rear surface of the user side housing 1 and is used for adjusting the position of the user side housing 1 according to the position information between the user side housing 1 and the user detected by the position detection device 3; the position adjusting device includes:
The device comprises a fixed plate 6, a movable plate 7, a motor 8, an output shaft 9, a first transmission shaft 10, a first belt wheel 11, a first gear 12, a clutch 13, a first lead screw 14, a movable block 15, a fixed sleeve 16, a support plate 17, a first connecting rod 18, a second connecting rod 19, a chute 20, a second lead screw 21, a fixed block 22, a second transmission shaft 23, a second belt wheel 24, a synchronous belt 25, a second gear 26 and a third transmission shaft 27;
the motor 8 is fixed on the moving plate 7, the output end of the motor 8 is fixed with one end of the output shaft 9, the other end of the output shaft 9 is fixed with a first belt wheel 11, and the first belt wheel 11 is connected with a second belt wheel 24 through a synchronous belt 25;
one end of the first transmission shaft 10 is fixed with the first belt wheel 11, a first gear 12 is fixed at the other end of the first transmission shaft 10, the first gear 12 is meshed with a second gear 26, the second gear 26 is fixed at one end of a third transmission shaft 27, and the other end of the third transmission shaft 27 is connected with one end of a first lead screw 14 through a clutch 13;
the first lead screw 14 is sleeved with a moving block 15, the moving block 15 is in threaded connection with the first lead screw 14, the other end of the first lead screw 14 is connected with a supporting plate 17, the supporting plate 17 is fixed on the moving plate 7, the fixed sleeve 16 is rotatably sleeved at the other end of the first lead screw 14, and the fixed sleeve 16 is fixed with the supporting plate 17;
One end of the first connecting rod 18 is rotatably connected to the moving block 15, the other end of the first connecting rod 18 is slidably connected to the fixed plate 6, one end of the second connecting rod 19 is rotatably connected to the fixed sleeve 16, and the other end of the second connecting rod 19 is rotatably connected to the fixed plate 6;
the user side shell 1 is fixed with the fixed plate 6;
one end of the second transmission shaft 23 is fixed on a second belt wheel 24, the other end of the second transmission shaft 23 is connected with a second lead screw 21 through a clutch 13, and the other end of the second lead screw 21 is connected with a supporting plate 17;
the movable plate 7 is provided with a chute 20, the fixed block 22 penetrates through the chute 20, the fixed block 22 is sleeved on the second screw rod 21, and the fixed block 22 is in threaded connection with the second screw rod 21.
The working principle and the beneficial effects of the technology are as follows: the motor 8 provides power for both the first lead screw 14 and the second lead screw 21, and a clutch 13 is arranged between the motor and each lead screw to engage and disengage the power transmission, so that the rotation of the first lead screw 14 and that of the second lead screw 21 do not interfere with each other. The rotation of the second lead screw 21 in the fixed block 22 drives the moving plate 7 to move up and down, while the rotation of the first lead screw 14 changes the distance between the moving block 15 and the fixed sleeve 16 and thereby the distance between the fixed plate 6 and the moving plate 7. As a result, the position of the user side housing 1 can be adjusted automatically according to the user position detected by the position detection device 3;
By providing the control key 4, the display screen 2 and the voice interaction device 5, the user can log in to the vision detection service platform over the network and input an operation instruction, and a personalized detection plan is generated based on the input operation instruction and the user's historical detection data stored by the vision detection service platform. The position detection device 3 detects the position information between the user side housing 1 and the user, and the position adjusting device then changes the relative position between the user side housing 1 and the user so that the user side housing 1 is at the optimal position relative to the user.
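As a hedged sketch of the control logic only (the patent describes the mechanism purely mechanically; the sensor fields, target distance, screw pitch and the mapping of the two lead screws to the height and distance axes below are all assumptions for illustration), the detected position information could drive the adjustment as follows:

```python
# Minimal sketch, assuming the second lead screw corrects height and the first corrects distance.
from dataclasses import dataclass


@dataclass
class PositionInfo:
    distance_m: float        # relative distance between the user side housing and the user
    height_offset_m: float   # housing height minus the user's line-of-sight height


TARGET_DISTANCE_M = 2.5      # assumed nominal testing distance
TOLERANCE_M = 0.02           # assumed dead band before any adjustment is made
METERS_PER_TURN = 0.005      # assumed lead screw travel per revolution


def adjust_position(info: PositionInfo) -> dict:
    """Turn one reading from the position detection device into lead screw commands."""
    commands = {"height_screw_turns": 0.0, "distance_screw_turns": 0.0}
    # Height correction: engage the clutch of the second lead screw and move the moving plate.
    if abs(info.height_offset_m) > TOLERANCE_M:
        commands["height_screw_turns"] = -info.height_offset_m / METERS_PER_TURN
    # Distance correction: engage the clutch of the first lead screw and change the plate spacing.
    distance_error = info.distance_m - TARGET_DISTANCE_M
    if abs(distance_error) > TOLERANCE_M:
        commands["distance_screw_turns"] = distance_error / METERS_PER_TURN
    return commands


print(adjust_position(PositionInfo(distance_m=2.6, height_offset_m=-0.03)))
```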
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (5)

1. A method for remote intelligent vision testing, comprising:
s1: the user side obtains the login authority of the vision detection service platform and inputs an operation instruction;
s2: retrieving corresponding historical detection data at the vision detection service platform based on the user type of the current user;
S3: generating a corresponding detection plan based on the historical detection data and the operation instruction;
s4: executing the detection plan, and simultaneously receiving a direction feedback instruction input by the user side;
s5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
s6: updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list;
s1: the user side obtains the login authority of the vision testing service platform and inputs an operation instruction, and the method comprises the following steps:
receiving a first user type feedback instruction and a second user type feedback instruction input by the user terminal, wherein the first user type comprises: medical care end, user end, parent end, second user type includes: history users, new users;
determining a corresponding login recommended mode based on the first user type feedback instruction, and displaying the login recommended mode to a login mode selection page;
receiving a login mode selection feedback instruction input by the user side;
determining a login mode corresponding to the user terminal based on the login mode selection feedback instruction;
Determining a corresponding verification mode based on the second user type feedback instruction;
generating a corresponding login page based on the verification mode and the login mode, and jumping the display page of the user side to the login page;
receiving primary login information input by the user side from the login page, verifying whether the primary login information is correct, and if yes, sending an authority acquisition success instruction to the user side;
otherwise, sending a permission acquisition failure instruction to the user terminal, and simultaneously, sending a user type secondary confirmation instruction to the user terminal;
determining a secondary login mode and a secondary verification mode based on the secondary confirmation instruction;
generating a secondary login page based on the secondary login mode and the secondary verification mode, and jumping the display page of the user side to the secondary login page;
receiving secondary login information input by the user side from the secondary login page, verifying whether the secondary login information is correct, and if yes, sending a right acquisition success instruction to the user side;
otherwise, sending a failure instruction of permission secondary acquisition to the user terminal, and simultaneously, jumping a display page of the user terminal to a new user registration page;
Receiving new user registration information input by the user terminal from the new user registration page, storing the new user registration information into a user information verification library, and sending a right acquisition success instruction to the user terminal;
when the user side authority is successfully obtained, an operation instruction input by the user side is obtained;
the operation instruction input by the user side is obtained, which comprises the following steps:
when the user side authority is successfully obtained or the new user is successfully registered, sending a detection type selection instruction and a detection range selection instruction to the user side;
receiving a detection type selection feedback instruction and a detection range selection feedback instruction which are input by the user terminal;
s2: searching corresponding historical detection data in the vision detection service platform based on the user type of the current user to obtain a search result, wherein the search result comprises the following steps:
s201: when the user side authority is successfully obtained and the second user type of the user side is a history user, obtaining login information corresponding to the user side;
s202: determining final login information in the login information;
s203: determining a corresponding search term chain in the final login information, and searching corresponding historical detection information in the vision detection service platform based on the search term chain;
S204: screening historical detection data of a corresponding detection type from the historical detection information based on the detection type feedback instruction;
wherein, the login information includes: primary login information and secondary login information;
s3: generating a corresponding detection plan based on the historical detection data and the operation instruction, including:
analyzing the history detection data to obtain corresponding detection time intervals and detection data fluctuation between adjacent history detection data;
based on each detection time interval and the corresponding detection data fluctuation, obtaining a fluctuation relation coefficient corresponding to each detection time interval;
obtaining a corresponding fluctuation amplitude relation coefficient fitting curve based on the fluctuation amplitude relation coefficient;
determining the latest time interval from the latest detection time to the current time in the historical detection data;
predicting a corresponding latest fluctuation range relation coefficient range based on the fluctuation range relation coefficient fitting curve and the latest time interval;
determining the latest detection data fluctuation range based on the latest fluctuation relation coefficient range and the latest time interval;
determining a first detection range based on the latest detection data fluctuation range and the latest detection data in the historical detection data;
Analyzing the detection range selection feedback instruction to obtain a corresponding user selection detection range;
judging whether the first detection range comprises the user selection detection range, if so, taking the first detection range as an initial detection range;
otherwise, taking the combined set of the first detection range and the user-selected detection range as an initial detection range;
generating a corresponding detection plan based on the initial detection range and the corresponding detection category;
based on the fluctuation range relation coefficient fitting curve and the latest time interval, a corresponding latest fluctuation range relation coefficient range is predicted, and the method comprises the following steps:
acquiring the slope of a fluctuation range relation coefficient fitting curve, and determining the latest standard fluctuation range relation coefficient based on the slope, the latest time interval and the latest data on the fluctuation range relation coefficient fitting curve, wherein the latest fluctuation range relation coefficient range is [ y-a, y+a ], a is the fluctuation range, and y is the latest data on the fluctuation range relation coefficient fitting curve;
the calculation formula of the fluctuation range is as follows:
wherein a is the fluctuation range; y is the latest data on the fluctuation range relation coefficient fitting curve; y_m is the latest standard fluctuation range relation coefficient;
s6: updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list, wherein the method comprises the following steps:
Storing the initial detection result and the current detection time into historical detection data corresponding to the corresponding detection type in the vision detection service platform;
based on the fee confirmation list finally sent to the user side, generating a corresponding fee paying instruction and sending the fee paying instruction to the user side;
all the history detection data of the user side are called, the history detection data are stored in a structuring mode according to detection types, and an omnibearing detection list corresponding to the user side is generated;
analyzing the historical detection data to determine the risk coefficient of the corresponding detection type;
screening out corresponding recommended items from an item library based on the detection types and the corresponding risk coefficients, generating a corresponding intelligent recommended list based on all the recommended items, and sending the intelligent recommended list to the user side;
analyzing the historical detection data to determine a risk coefficient of a corresponding detection category includes:
wherein, in the formula for calculating the risk coefficient of the current detection category: j is the index of the j-th historical detection data in the current detection category; m is the total number of historical detection data in the current detection category; x_m is the m-th historical detection data in the current detection category; x_j is the j-th historical detection data in the current detection category; x_(m-1) is the (m-1)-th historical detection data in the current detection category; x_(j+1) is the (j+1)-th historical detection data in the current detection category; x_01 is the first risk level coefficient; x_02 is the second risk level coefficient; x_03 is the third risk level coefficient; when (x_m - x_j) is 0, the corresponding term in the formula is taken as 0, and when (x_(j+1) - x_j) is 0, the corresponding term in the formula is taken as 0;
the first risk level coefficient is a coefficient corresponding to the set first risk level, the second risk level coefficient is a coefficient corresponding to the set second risk level, and the third risk level coefficient is a coefficient corresponding to the set third risk level.
2. A method of remote intelligent vision testing as set forth in claim 1, wherein generating a corresponding test plan based on the initial test range and a corresponding test category comprises:
determining a corresponding photometric range and a measurement distance based on the corresponding detection category;
generating a corresponding detection word sequence table based on the initial detection range;
generating a corresponding first detection plan based on the luminosity range and the measured distance and the detection word sequence list;
generating a corresponding first expense confirmation list based on the first detection plan and sending the first expense confirmation list to the user side;
Receiving a confirmation feedback instruction input by the user side;
if the confirmation feedback instruction is confirmation detection, the first detection plan is used as a corresponding detection plan;
and if the confirmation feedback instruction is the adjustment detection range or the adjustment detection type, adjusting the first detection plan based on the confirmation feedback instruction, generating a corresponding second detection plan, sending a second expense confirmation list corresponding to the second detection plan to the user side, and taking the second detection plan as the corresponding detection plan until the feedback confirmation instruction input by the user side is received.
3. A method of remote intelligent vision testing according to claim 2, wherein S4: executing the detection plan, and simultaneously receiving a direction feedback instruction input by the user terminal, wherein the method comprises the following steps:
s401: generating a corresponding voice prompt list based on the detection word sequence list and the feedback receiving time;
s402: broadcasting prompt voice based on the voice prompt list, and receiving a direction feedback instruction input by the user side;
s5: and adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained, wherein the method comprises the following steps of:
Analyzing the direction feedback instruction in real time to obtain a direction feedback result;
judging whether the direction feedback result is consistent with the corresponding detection word direction, if so, continuing broadcasting the prompt voice based on the voice prompt list until an initial detection result is obtained;
otherwise, re-broadcasting the latest broadcasted prompt voice and receiving a secondary direction feedback instruction;
judging whether a secondary direction feedback result corresponding to the secondary direction feedback instruction is consistent with the corresponding detection word direction, if so, continuing broadcasting prompt voice based on the voice prompt list until an initial detection result is obtained;
otherwise, determining an initial detection result based on the detection word corresponding to the latest broadcasted prompt voice.
4. A remote intelligent vision testing system, comprising:
the input module is used for the user side to acquire the login permission of the vision detection service platform and input an operation instruction;
the retrieval module is used for retrieving corresponding historical detection data from the vision detection service platform based on the user type of the current user;
the generation module is used for generating a corresponding detection plan based on the historical detection data and the operation instruction;
the execution module is used for executing the detection plan and receiving a direction feedback instruction input by the user side;
the adjusting module is used for adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
the output module is used for updating the initial detection result to the vision detection service platform and generating a corresponding payment instruction and an intelligent recommendation list;
the user side obtains the login authority of the vision testing service platform and inputs an operation instruction, and the method comprises the following steps:
receiving a first user type feedback instruction and a second user type feedback instruction input by the user terminal, wherein the first user type comprises: medical care end, user end, parent end, second user type includes: history users, new users;
determining a corresponding login recommended mode based on the first user type feedback instruction, and displaying the login recommended mode to a login mode selection page;
receiving a login mode selection feedback instruction input by the user side;
determining a login mode corresponding to the user terminal based on the login mode selection feedback instruction;
determining a corresponding verification mode based on the second user type feedback instruction;
Generating a corresponding login page based on the verification mode and the login mode, and jumping the display page of the user side to the login page;
receiving primary login information input by the user side from the login page, verifying whether the primary login information is correct, and if yes, sending an authority acquisition success instruction to the user side;
otherwise, sending a permission acquisition failure instruction to the user terminal, and simultaneously, sending a user type secondary confirmation instruction to the user terminal;
determining a secondary login mode and a secondary verification mode based on the secondary confirmation instruction;
generating a secondary login page based on the secondary login mode and the secondary verification mode, and jumping the display page of the user side to the secondary login page;
receiving secondary login information input by the user side from the secondary login page, verifying whether the secondary login information is correct, and if yes, sending a right acquisition success instruction to the user side;
otherwise, sending a failure instruction of permission secondary acquisition to the user terminal, and simultaneously, jumping a display page of the user terminal to a new user registration page;
receiving new user registration information input by the user terminal from the new user registration page, storing the new user registration information into a user information verification library, and sending a right acquisition success instruction to the user terminal;
When the user side authority is successfully obtained, an operation instruction input by the user side is obtained;
the operation instruction input by the user side is obtained, which comprises the following steps:
when the user side authority is successfully obtained or the new user is successfully registered, sending a detection type selection instruction and a detection range selection instruction to the user side;
receiving a detection type selection feedback instruction and a detection range selection feedback instruction which are input by the user terminal;
searching corresponding historical detection data in the vision detection service platform based on the user type of the current user to obtain a search result, wherein the search result comprises the following steps:
when the user side authority is successfully obtained and the second user type of the user side is a history user, obtaining login information corresponding to the user side;
determining final login information in the login information;
determining a corresponding search term chain in the final login information, and searching corresponding historical detection information in the vision detection service platform based on the search term chain;
screening historical detection data of a corresponding detection type from the historical detection information based on the detection type feedback instruction;
wherein, the login information includes: primary login information and secondary login information;
Generating a corresponding detection plan based on the historical detection data and the operation instruction, including:
analyzing the history detection data to obtain corresponding detection time intervals and detection data fluctuation between adjacent history detection data;
based on each detection time interval and the corresponding detection data fluctuation, obtaining a fluctuation relation coefficient corresponding to each detection time interval;
obtaining a corresponding fluctuation amplitude relation coefficient fitting curve based on the fluctuation amplitude relation coefficient;
determining the latest time interval from the latest detection time to the current time in the historical detection data;
predicting a corresponding latest fluctuation range relation coefficient range based on the fluctuation range relation coefficient fitting curve and the latest time interval;
determining the latest detection data fluctuation range based on the latest fluctuation relation coefficient range and the latest time interval;
determining a first detection range based on the latest detection data fluctuation range and the latest detection data in the historical detection data;
analyzing the detection range selection feedback instruction to obtain a corresponding user selection detection range;
judging whether the first detection range comprises the user selection detection range, if so, taking the first detection range as an initial detection range;
Otherwise, taking the combined set of the first detection range and the user-selected detection range as an initial detection range;
generating a corresponding detection plan based on the initial detection range and the corresponding detection category;
based on the fluctuation range relation coefficient fitting curve and the latest time interval, a corresponding latest fluctuation range relation coefficient range is predicted, and the method comprises the following steps:
acquiring the slope of a fluctuation range relation coefficient fitting curve, and determining the latest standard fluctuation range relation coefficient based on the slope, the latest time interval and the latest data on the fluctuation range relation coefficient fitting curve, wherein the latest fluctuation range relation coefficient range is [ y-a, y+a ], a is the fluctuation range, and y is the latest data on the fluctuation range relation coefficient fitting curve;
the calculation formula of the fluctuation range is as follows:
wherein a is the fluctuation range; y is the latest data on the fluctuation range relation coefficient fitting curve; y_m is the latest standard fluctuation range relation coefficient;
updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list, wherein the method comprises the following steps:
storing the initial detection result and the current detection time into historical detection data corresponding to the corresponding detection type in the vision detection service platform;
Based on the fee confirmation list finally sent to the user side, generating a corresponding fee paying instruction and sending the fee paying instruction to the user side;
all the history detection data of the user side are called, the history detection data are stored in a structuring mode according to detection types, and an omnibearing detection list corresponding to the user side is generated;
analyzing the historical detection data to determine the risk coefficient of the corresponding detection type;
screening out corresponding recommended items from an item library based on the detection types and the corresponding risk coefficients, generating a corresponding intelligent recommended list based on all the recommended items, and sending the intelligent recommended list to the user side;
analyzing the historical detection data to determine a risk coefficient of a corresponding detection category includes:
wherein, in the formula for calculating the risk coefficient of the current detection category: j is the index of the j-th historical detection data in the current detection category; m is the total number of historical detection data in the current detection category; x_m is the m-th historical detection data in the current detection category; x_j is the j-th historical detection data in the current detection category; x_(m-1) is the (m-1)-th historical detection data in the current detection category; x_(j+1) is the (j+1)-th historical detection data in the current detection category; x_01 is the first risk level coefficient; x_02 is the second risk level coefficient; x_03 is the third risk level coefficient; when (x_m - x_j) is 0, the corresponding term in the formula is taken as 0, and when (x_(j+1) - x_j) is 0, the corresponding term in the formula is taken as 0;
the first risk level coefficient is a coefficient corresponding to the set first risk level, the second risk level coefficient is a coefficient corresponding to the set second risk level, and the third risk level coefficient is a coefficient corresponding to the set third risk level.
5. A remote intelligent vision testing apparatus for implementing a remote intelligent vision testing method as set forth in any one of claims 1-3, wherein the remote intelligent vision testing apparatus comprises:
the mobile terminal comprises a user side shell (1) and a display screen (2), wherein the display screen (2) is arranged on the front surface of the user side shell (1), and four sides of the display screen (2) are wrapped by the user side shell (1);
the surface of the user side shell (1) is also provided with a position detection device (3), a control key (4) and a voice interaction device (5), and the position detection device (3), the control key (4) and the voice interaction device (5) are all positioned below the display screen (2);
the position detection device (3) is used for detecting position information between the user side shell (1) and a user;
a position adjusting device is provided on the user side housing (1); the position adjusting device is arranged on the rear surface of the user side housing (1) and is used for adjusting the position of the user side housing (1) according to the position information between the user side housing (1) and the user detected by the position detection device (3); the position adjusting device comprises:
The device comprises a fixed plate (6), a movable plate (7), a motor (8), an output shaft (9), a first transmission shaft (10), a first belt wheel (11), a first gear (12), a clutch (13), a first lead screw (14), a movable block (15), a fixed sleeve (16), a supporting plate (17), a first connecting rod (18), a second connecting rod (19), a chute (20), a second lead screw (21), a fixed block (22), a second transmission shaft (23), a second belt wheel (24), a synchronous belt (25), a second gear (26) and a third transmission shaft (27);
the motor (8) is fixed on the moving plate (7), the output end of the motor (8) is fixed with one end of the output shaft (9), a first belt wheel (11) is fixed at the other end of the output shaft (9), and the first belt wheel (11) is connected with a second belt wheel (24) through a synchronous belt (25);
one end of the first transmission shaft (10) is fixed with the first belt wheel (11), a first gear (12) is fixed at the other end of the first transmission shaft (10), the first gear (12) is meshed with a second gear (26), the second gear (26) is fixed at one end of a third transmission shaft (27), and the other end of the third transmission shaft (27) is connected with one end of a first screw rod (14) through a clutch (13);
the movable block (15) is sleeved on the first screw rod (14), the movable block (15) is in threaded connection with the first screw rod (14), the other end of the first screw rod (14) is connected with the supporting plate (17), the supporting plate (17) is fixed on the movable plate (7), the fixed sleeve (16) is rotatably sleeved on the other end of the first screw rod (14), and the fixed sleeve (16) is fixed with the supporting plate (17);
One end of the first connecting rod (18) is rotationally connected with the moving block (15), the other end of the first connecting rod (18) is slidingly connected with the fixed plate (6), one end of the second connecting rod (19) is rotationally connected with the fixed sleeve (16), and the other end of the second connecting rod (19) is rotationally connected with the fixed plate (6);
the user side shell (1) is fixed with the fixed plate (6);
one end of the second transmission shaft (23) is fixed on a second belt wheel (24), the other end of the second transmission shaft (23) is connected with a second lead screw (21) through a clutch (13), and the other end of the second lead screw (21) is connected with a supporting plate (17);
the movable plate (7) is provided with a sliding groove (20), the fixed block (22) penetrates through the sliding groove (20), the fixed block (22) is sleeved on the second screw rod (21), and the fixed block (22) is in threaded connection with the second screw rod (21).
CN202210657207.0A 2022-06-10 2022-06-10 Remote intelligent vision detection method, system and device Active CN115054198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210657207.0A CN115054198B (en) 2022-06-10 2022-06-10 Remote intelligent vision detection method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210657207.0A CN115054198B (en) 2022-06-10 2022-06-10 Remote intelligent vision detection method, system and device

Publications (2)

Publication Number Publication Date
CN115054198A CN115054198A (en) 2022-09-16
CN115054198B true CN115054198B (en) 2023-07-21

Family

ID=83199917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210657207.0A Active CN115054198B (en) 2022-06-10 2022-06-10 Remote intelligent vision detection method, system and device

Country Status (1)

Country Link
CN (1) CN115054198B (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10402211B2 (en) * 2016-10-21 2019-09-03 Inno Stream Technology Co., Ltd. Method for processing innovation-creativity data information, user equipment and cloud server
CN108960166A (en) * 2018-07-11 2018-12-07 谢涛远 A kind of vision testing system, method, terminal and medium
CN109330555A (en) * 2018-09-14 2019-02-15 侯尧珍 A kind of intelligent eyesight detection based on cloud computing and training correction system
CN109893080A (en) * 2019-03-26 2019-06-18 张旭 A kind of intelligent interactive method of self-service measurement eyesight
CN110393503A (en) * 2019-07-18 2019-11-01 苏州国科康成医疗科技有限公司 Vision inspection system with cloud service function
CN111803022A (en) * 2020-06-24 2020-10-23 深圳数联天下智能科技有限公司 Vision detection method, detection device, terminal equipment and readable storage medium
CN112656363B (en) * 2020-12-17 2023-04-25 维视艾康特(广东)医疗科技股份有限公司 Vision detection system and vision detection method
CN112932402A (en) * 2021-02-07 2021-06-11 浙江工贸职业技术学院 Self-service vision screening system based on artificial intelligence and intelligent perception
CN114190880A (en) * 2021-12-09 2022-03-18 深圳创维-Rgb电子有限公司 Vision detection method and device, electronic equipment and storage medium
CN114468973B (en) * 2022-01-21 2023-08-11 广州视域光学科技股份有限公司 Intelligent vision detection system

Also Published As

Publication number Publication date
CN115054198A (en) 2022-09-16

Similar Documents

Publication Publication Date Title
US6728662B2 (en) Method and system for remotely servicing a detection device
Emons et al. Reference materials: terminology and use. Can't one see the forest for the trees?
KR101647423B1 (en) System, server and method for diagnosing electric power equipments automatically
CN109451304B (en) Batch focusing test method and system for camera modules
CN206470275U (en) A kind of hand-held collaurum readout instrument
Bulgarelli et al. Evaluating the maximum likelihood method for detecting short-term variability of AGILE γ-ray sources
CN104011505A (en) Proactive user-based content correction and enrichment for geo data
CN114894254A (en) Dynamic metering method for carbon sink of single-plant wood
CN115054198B (en) Remote intelligent vision detection method, system and device
WO2017141225A2 (en) Method for diagnosing/managing new renewable energy facility using mobile terminal and system therefor
CN107741575A (en) Beacon light character intelligent Detection and detection method
CN115860280B (en) Shale gas yield prediction method, device, equipment and storage medium
CN115129810A (en) Service life evaluation system based on equipment fault detection
JP2011075468A (en) Specimen inspection apparatus
CN117332240B (en) Rock burst prediction model construction method, storage medium, rock burst prediction method and system
CN110333361A (en) A kind of full-automatic spectrum sampling modeling and method
CN116399305B (en) ADCP (automatic dependent control protocol) current measurement result on-site warehouse entry and self-correction method based on cloud platform
CN109325556A (en) A kind of construction information management method based on planar bar code technology
Smith et al. Potential Roles for Unattended Safeguards Instrumentation at Centrifuge Enrichment Plants.
US11227174B1 (en) License plate recognition
CN112883271A (en) Course education system based on analytic hierarchy process evaluation and course recommendation method
KR20180093594A (en) Method and portable spectrometer for measuring moisture content of soil by using light reflectance
Moscardini et al. Constraining cosmological parameters with the clustering properties of galaxy clusters in optical and X-ray bands
CN115660913A (en) System and method for customizing learning content for user
CN112923967B (en) Instrument calibration system supporting rapid assessment of instrument calibration uncertainty

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant