CN115054198A - Remote intelligent vision detection method, system and device


Publication number
CN115054198A
Authority
CN
China
Prior art keywords
detection
user
user side
instruction
login
Prior art date
Legal status
Granted
Application number
CN202210657207.0A
Other languages
Chinese (zh)
Other versions
CN115054198B (en)
Inventor
伍卫东
项道满
孟晶
刘小勇
Current Assignee
Guangzhou Vision Optical Technology Co ltd
Original Assignee
Guangzhou Vision Optical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Vision Optical Technology Co ltd filed Critical Guangzhou Vision Optical Technology Co ltd
Priority to CN202210657207.0A
Publication of CN115054198A
Application granted
Publication of CN115054198B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/028 Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B 3/032 Devices for presenting test symbols or characters, e.g. test chart projectors
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H 40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a remote intelligent vision detection method, a system and a device, wherein the method comprises the following steps: S1: the user side acquires the login authority of the vision detection service platform and inputs an operation instruction; S2: retrieving corresponding historical detection data on the vision detection service platform based on the user type of the current user; S3: generating a corresponding detection plan based on the historical detection data and the operation instruction; S4: executing the detection plan, and receiving a direction feedback instruction input by the user side; S5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained; S6: updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list. A personalized detection plan is generated through network communication, and detection results are stored, updated and viewed on the cloud service platform, so that the detection process becomes networked and intelligent and detection efficiency and accuracy are improved.

Description

Remote intelligent vision detection method, system and device
Technical Field
The invention relates to the technical field of intelligent detection, in particular to a remote intelligent vision detection method, system and device.
Background
At present, in a traditional vision test, a professional must consult the test history of a user, deduce a rough test range from professional experience, and then carry out a manual pointing test over that deduced range. Memory deviations or other factors on the user's side may make the reported test history incomplete or introduce large errors, so an accurate personalized test plan cannot be generated; the whole vision test process is therefore inefficient and time-consuming, and the manual pointing test mode does not yield sufficiently accurate results. Moreover, traditional vision testing services provide no one-to-one data storage for users, which hinders the storage, updating and viewing of each user's detection data.
Therefore, the invention provides a remote intelligent vision detection method, system and device.
Disclosure of Invention
The invention provides a remote intelligent vision detection method, a system and a device, which are used for generating an individualized detection plan through network communication, realizing storage, updating and checking of detection results on a cloud service platform, realizing intellectualization of a detection process and improving detection efficiency and accuracy.
The invention provides a remote intelligent vision detection method, which comprises the following steps:
S1: a user side acquires the login authority of the vision detection service platform and inputs an operation instruction;
S2: retrieving corresponding historical detection data in the vision detection service platform based on the user type of the current user;
S3: generating a corresponding detection plan based on the historical detection data and the operation instruction;
S4: executing the detection plan, and receiving a direction feedback instruction input by the user side;
S5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
S6: and updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list.
Preferably, in the remote intelligent vision testing method, the step of S1: the user side obtains the login authority of the vision detection service platform and inputs an operation instruction, and the method comprises the following steps:
receiving a first user type feedback instruction and a second user type feedback instruction input by the user side, wherein the first user type comprises: a medical care end, a user end and a guardian end, and the second user type comprises: historical users and new users;
determining a corresponding login recommendation mode based on the first user type feedback instruction, and displaying the login recommendation mode on a login mode selection page;
receiving a login mode selection feedback instruction input by the user side;
determining a login mode corresponding to the user side based on the login mode selection feedback instruction;
determining a corresponding verification mode based on the second user type feedback instruction;
generating a corresponding login page based on the verification mode and the login mode, and jumping a display page of the user side to the login page;
receiving initial login information input by the user side from the login page, verifying whether the initial login information is correct or not, and if so, sending an authority acquisition success instruction to the user side;
otherwise, sending an authority acquisition failure instruction to the user side, and simultaneously sending a user type secondary confirmation instruction to the user side;
determining a secondary login mode and a secondary verification mode based on the secondary confirmation instruction;
generating a secondary login page based on the secondary login mode and the secondary verification mode, and skipping a display page of the user side to the secondary login page;
receiving secondary login information input by the user side from the secondary login page, verifying whether the secondary login information is correct or not, and if so, sending an authority acquisition success instruction to the user side;
otherwise, sending a permission secondary acquisition failure instruction to the user side, and simultaneously skipping a display page of the user side to a new user registration page;
receiving new user registration information input by the user side from the new user registration page, storing the new user registration information into a user information verification library, and sending an authority acquisition success instruction to the user side;
and when the user side authority is successfully acquired, acquiring an operation instruction input by the user side.
Preferably, the method for remotely and intelligently detecting eyesight acquires the operation instruction input by the user side, and includes:
when the user side authority is successfully acquired or a new user is successfully registered, a detection type selection instruction and a detection range selection instruction are sent to the user side;
and receiving a detection type selection feedback instruction and a detection range selection feedback instruction input by the user side.
Preferably, in the remote intelligent vision testing method, the step of S2: retrieving corresponding historical detection data on the vision detection service platform based on the user type of the current user to obtain a retrieval result, comprises the following steps:
S201: when the user side authority is successfully acquired and the second user type of the user side is a historical user, acquiring login information corresponding to the user side;
S202: determining final login information in the login information;
S203: determining a corresponding search term chain in the final login information, and searching corresponding historical detection information on the basis of the search term chain on the vision detection service platform;
S204: screening out historical detection data of corresponding detection types from the historical detection information based on the detection type feedback instruction;
wherein the login information comprises: primary login information and secondary login information.
Preferably, in the remote intelligent vision testing method, the step of S3: generating a corresponding detection plan based on the historical detection data and the operation instruction, including:
analyzing the historical detection data to obtain the detection time intervals and detection data change amplitudes between adjacent historical detection data;
obtaining an amplitude relationship coefficient corresponding to each detection time interval based on each detection time interval and the corresponding detection data change amplitude;
obtaining a corresponding amplitude relationship coefficient fitting curve based on the amplitude relationship coefficients;
determining the latest time interval between the latest detection time in the historical detection data and the current time;
predicting a corresponding latest amplitude relationship coefficient range based on the amplitude relationship coefficient fitting curve and the latest time interval;
determining the amplitude range of the latest detection data based on the latest amplitude relationship coefficient range and the latest time interval;
determining a first detection range based on the latest detection data amplitude range and the latest detection data in the historical detection data;
analyzing the detection range selection feedback instruction to obtain a corresponding user-selected detection range;
judging whether the first detection range contains the user-selected detection range, and if so, taking the first detection range as an initial detection range;
otherwise, taking the union of the first detection range and the user-selected detection range as the initial detection range;
and generating a corresponding detection plan based on the initial detection range and the corresponding detection category.
Preferably, the method for remote intelligent vision testing generates a corresponding testing plan based on the initial testing range and the corresponding testing category, and includes:
determining a corresponding photometric range and measurement distance based on the corresponding detection category;
generating a corresponding detection word order list based on the initial detection range;
generating a corresponding first detection plan based on the luminosity range and the measured distance and the detection word sequence list;
generating a corresponding first expense confirmation list based on the first detection plan and sending the first expense confirmation list to the user side;
receiving a confirmation feedback instruction input by the user side;
if the confirmation feedback instruction is confirmation detection, taking the first detection plan as a corresponding detection plan;
if the confirmation feedback instruction is used for adjusting the detection range or adjusting the detection type, adjusting the first detection plan based on the confirmation feedback instruction, generating a corresponding second detection plan, sending a second expense confirmation list corresponding to the second detection plan to the user side, and taking the second detection plan as the corresponding detection plan until receiving a feedback confirmation instruction input by the user side.
Preferably, in the remote intelligent vision testing method, the step of S4: executing the detection plan, and receiving a direction feedback instruction input by the user side, wherein the direction feedback instruction comprises the following steps:
S401: generating a corresponding voice prompt list based on the detected word order list and the feedback receiving time;
S402: and broadcasting prompt voice based on the voice prompt list, and receiving a direction feedback instruction input by the user side.
Preferably, in the remote intelligent vision testing method, the step of S5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained, including:
analyzing the direction feedback instruction in real time to obtain a direction feedback result;
judging whether the direction feedback result is consistent with the direction of the corresponding detection word or not, if so, continuing to broadcast a prompt voice based on the voice prompt list until an initial detection result is obtained;
if not, re-broadcasting the most recently broadcast prompt voice and receiving a secondary direction feedback instruction;
judging whether a secondary direction feedback result corresponding to the secondary direction feedback instruction is consistent with the direction of the corresponding detection word, and if so, continuing to broadcast prompt voices based on the voice prompt list until an initial detection result is obtained;
otherwise, determining the initial detection result based on the detection word corresponding to the most recently broadcast prompt voice.
Preferably, in the remote intelligent vision testing method, the step of S6: updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list, wherein the steps comprise:
storing the initial detection result and the current detection time into historical detection data corresponding to the detection types in the vision detection service platform;
generating a corresponding payment instruction based on the expense confirmation list finally sent to the user side, and sending the payment instruction to the user side;
calling all historical detection data of the user side, and performing structured storage on the historical detection data according to detection types to generate an all-directional detection list corresponding to the user side;
analyzing the historical detection data to determine the risk coefficient corresponding to each detection type;
and screening out corresponding recommended items from an item library based on the detection types and the corresponding risk coefficients, generating a corresponding intelligent recommendation list based on all the recommended items, and sending the intelligent recommendation list to the user side.
Preferably, a remote intelligent vision testing system includes:
the input module is used for the user side to acquire the login authority of the vision detection service platform and input an operation instruction;
the retrieval module is used for retrieving corresponding historical detection data on the vision detection service platform based on the user type of the current user;
the generating module is used for generating a corresponding detection plan based on the historical detection data and the operation instruction;
the execution module executes the detection plan and receives a direction feedback instruction input by the user side;
the adjusting module is used for adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
and the output module is used for updating the initial detection result to the vision detection service platform and generating a corresponding payment instruction and an intelligent recommendation list.
Preferably, a remote intelligent vision testing apparatus is used to implement the remote intelligent vision testing method, and the remote intelligent vision testing apparatus includes:
the display screen is arranged on the front surface of the user end shell, and four sides of the display screen are wrapped by the user end shell;
the surface of the user end shell is also provided with a position detection device, an operation key and a voice interaction device, and the position detection device, the operation key and the voice interaction device are all positioned below the display screen;
the position detection device is used for detecting position information between the user end shell and a user;
a position adjusting device is arranged on the user-side housing; the position adjusting device is arranged on the rear surface of the user-side housing and is used for adjusting the position between the user-side housing and the user according to the position information, detected by the position detection device, between the user-side housing and the user; the position adjusting device comprises:
the device comprises a fixed plate, a movable plate, a motor, an output shaft, a first transmission shaft, a first belt wheel, a first gear, a clutch, a first lead screw, a movable block, a fixed sleeve, a supporting plate, a first connecting rod, a second connecting rod, a sliding chute, a second lead screw, a fixed block, a second transmission shaft, a second belt wheel, a synchronous belt, a second gear and a third transmission shaft;
the motor is fixed on the movable plate, the output end of the motor is fixed with one end of an output shaft, the other end of the output shaft is fixed with a first belt wheel, and the first belt wheel is connected with a second belt wheel through a synchronous belt;
one end of the first transmission shaft is fixed with a first belt pulley, the other end of the first transmission shaft is fixed with a first gear, the first gear is meshed with a second gear, the second gear is fixed at one end of a third transmission shaft, and the other end of the third transmission shaft is connected with one end of a first lead screw through a clutch;
the first lead screw is sleeved with a moving block, the moving block is in threaded connection with the first lead screw, the other end of the first lead screw is connected to a supporting plate, the supporting plate is fixed on the moving plate, the fixed sleeve is rotatably sleeved at the other end of the first lead screw, and the fixed sleeve is fixed with the supporting plate;
one end of the first connecting rod is rotatably connected to the moving block, the other end of the first connecting rod is slidably connected to the fixed plate, one end of the second connecting rod is rotatably connected to the fixed sleeve, and the other end of the second connecting rod is rotatably connected to the fixed plate;
the user side shell is fixed with the fixing plate;
one end of the second transmission shaft is fixed on the second belt wheel, the other end of the second transmission shaft is connected with a second lead screw through a clutch, and the other end of the second lead screw is connected with the supporting plate;
the movable plate is provided with a sliding groove, the fixed block penetrates through the sliding groove, the fixed block is connected to the second lead screw in a sleeved mode, and the fixed block is in threaded connection with the second lead screw.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a remote intelligent vision testing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another remote intelligent vision testing method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method for remote intelligent vision testing in accordance with another embodiment of the present invention;
FIG. 4 is a schematic diagram of a remote intelligent vision testing system according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a remote intelligent vision testing apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic front view of an adjusting device of the remote intelligent vision testing apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic side view of an adjusting device of the remote intelligent vision testing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic side view of an adjusting device of a remote intelligent vision testing device according to another embodiment of the present invention.
In the figure: 1. user-side housing; 2. display screen; 3. position detection device; 4. operation key; 5. voice interaction device; 6. fixing plate; 7. moving plate; 8. motor; 9. output shaft; 10. first transmission shaft; 11. first pulley; 12. first gear; 13. clutch; 14. first lead screw; 15. moving block; 16. fixed sleeve; 17. support plate; 18. first link; 19. second link; 20. chute; 21. second lead screw; 22. fixed block; 23. second transmission shaft; 24. second pulley; 25. synchronous belt; 26. second gear; 27. third transmission shaft.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
the invention provides a remote intelligent vision detection method, which refers to a picture 1 and comprises the following steps:
s1: the user side acquires the login authority of the vision detection service platform and inputs an operation instruction;
s2: retrieving corresponding historical detection data in the vision detection service platform based on the user type of the current user;
s3: generating a corresponding detection plan based on the historical detection data and the operation instruction;
s4: executing the detection plan, and receiving a direction feedback instruction input by the user side;
s5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
s6: and updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list.
In this embodiment, the vision testing service platform is a platform for providing testing service to the user terminal, and is also a platform for storing user historical testing data.
In this embodiment, the login right is a right for the user to successfully log in the vision testing service platform.
In this embodiment, the operation instruction includes: and the detection type selection feedback instruction and the detection range selection feedback instruction are used for determining the vision detection type and the detection range of the current user.
In this embodiment, the user types include: historical users, new users.
In this embodiment, the historical detection data is the current user's vision detection historical data.
In this embodiment, the detection plan is a personalized detection plan generated based on the historical detection data and the operation instruction input by the user side, and includes a detection word detection sequence and an indication voice corresponding to each detection word included in the detection plan.
In this embodiment, the direction feedback instruction is the detection word direction input by the user side and received after each instruction voice is broadcast.
In this embodiment, the initial detection result is a vision detection result generated based on a direction feedback instruction input by the user and obtained after the detection plan is executed.
In this embodiment, the payment instruction is an instruction that includes a fee confirmation list corresponding to the detection item and is used to remind the user to pay a fee.
In this embodiment, the intelligent recommendation list is a list of recommended items retrieved from the item library based on analysis of the initial detection result and the historical detection data.
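To make the overall flow of this embodiment concrete, a minimal Python sketch of the S1 to S6 pipeline is given below. All object, method and platform names (for example run_remote_vision_test, acquire_login, build_detection_plan) are illustrative assumptions only and are not part of the disclosed implementation.

```python
# Illustrative sketch of the S1-S6 flow described in Example 1.
# All names below are assumptions for illustration only.

def run_remote_vision_test(platform, client):
    # S1: the user side acquires login authority and inputs an operation instruction
    session = platform.acquire_login(client)
    operation = client.input_operation_instruction()    # detection category + detection range

    # S2: retrieve the current user's historical detection data
    history = platform.retrieve_history(session.user, operation.category)

    # S3: generate a personalized detection plan
    plan = platform.build_detection_plan(history, operation)

    # S4 + S5: execute the plan, adjusting it in real time from direction feedback
    while not plan.finished():
        client.broadcast(plan.next_voice_prompt())
        feedback = client.receive_direction_feedback()
        plan.adjust(feedback)                            # real-time adjustment
    initial_result = plan.initial_result()

    # S6: update the platform, then issue the payment instruction and recommendation list
    platform.store_result(session.user, initial_result)
    client.send(platform.payment_instruction(plan))
    client.send(platform.intelligent_recommendation(session.user, initial_result))
    return initial_result
```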
The beneficial effects of the above technology are: the invention enables a user to input an operation instruction by logging in the vision detection service platform through a network, generates a personalized detection plan based on the input operation instruction and historical detection data of the user stored by the vision detection service platform, prompts the user to input a direction feedback instruction by voice prompt, converts language description in the traditional vision detection process into a network communication mode, can adjust the detection plan in real time according to the direction feedback instruction fed back by the user, greatly improves the detection efficiency, reduces the time consumption in the detection process, ensures the accuracy of the vision detection process, realizes the storage, update and check of vision detection results and historical detection data in the network cloud service platform, and realizes the intellectualization of the detection process.
Example 2:
on the basis of the embodiment 1, the remote intelligent vision testing method, S1: the user side obtains the login authority of the vision detection service platform and inputs an operation instruction, and the method comprises the following steps:
receiving a first user type feedback instruction and a second user type feedback instruction input by the user side, wherein the first user type comprises: a medical care end, a user end and a guardian end, and the second user type comprises: historical users and new users; determining a corresponding login recommendation mode based on the first user type feedback instruction, and displaying the login recommendation mode on a login mode selection page; receiving a login mode selection feedback instruction input by the user side; determining a login mode corresponding to the user side based on the login mode selection feedback instruction; determining a corresponding verification mode based on the second user type feedback instruction; generating a corresponding login page based on the verification mode and the login mode, and jumping a display page of the user side to the login page; receiving initial login information input by the user side from the login page, verifying whether the initial login information is correct or not, and if so, sending an authority acquisition success instruction to the user side; otherwise, sending an authority acquisition failure instruction to the user side, and simultaneously sending a user type secondary confirmation instruction to the user side; determining a secondary login mode and a secondary verification mode based on the secondary confirmation instruction; generating a secondary login page based on the secondary login mode and the secondary verification mode, and skipping a display page of the user side to the secondary login page; receiving secondary login information input by the user side from the secondary login page, verifying whether the secondary login information is correct or not, and if so, sending an authority acquisition success instruction to the user side; otherwise, sending a permission secondary acquisition failure instruction to the user side, and simultaneously skipping a display page of the user side to a new user registration page; receiving new user registration information input by the user side from the new user registration page, storing the new user registration information into a user information verification library, and sending an authority acquisition success instruction to the user side; and when the user side authority is successfully acquired, acquiring an operation instruction input by the user side.
In this embodiment, the first user type feedback instruction is an instruction for feeding back the first user type of the current user.
In this embodiment, the second user type feedback instruction is an instruction for feeding back the second user type of the current user.
In this embodiment, the login recommendation mode is a login mode suitable for the current user determined based on the first user type of the current user, and examples of the login recommendation mode include: account password login, face recognition login, etc.
In this embodiment, the login mode selection page is a network page for displaying login mode options in the vision inspection service platform, and is also a network page for receiving a login mode selection feedback instruction input by a user.
In this embodiment, the login mode selection feedback command is a command indicating the login mode selected by the user.
In this embodiment, the login mode is a mode for the user to log in the vision testing service platform.
In this embodiment, the authentication mode is an authentication (security authentication) mode when the user logs in the vision testing service platform.
In this embodiment, the login page is a network page generated based on the login mode and the verification mode and used for the user to log in and enter the vision detection service platform.
In this embodiment, the initial login information is login information initially input by the user side on the login page.
In this embodiment, verifying whether the initial login information is correct includes: determining a login main keyword in the primary login information, searching whether user information corresponding to the login main keyword exists in a user information base, if so, judging whether other information except the login main keyword in the primary login information is consistent with the searched user information, and if so, judging that the primary login information is correct; otherwise, the initial login information is judged to be incorrect.
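A minimal sketch of this verification step is shown below, assuming the user information base is a simple in-memory mapping; the field names used here are illustrative assumptions, not details fixed by the embodiment.

```python
# Hedged sketch: verify login information against a user information base.
# `user_info_db` maps a login main keyword (e.g. an account name) to stored user info.

def verify_login_info(login_info: dict, user_info_db: dict) -> bool:
    main_key = login_info.get("main_keyword")           # login main keyword
    stored = user_info_db.get(main_key)
    if stored is None:                                   # no matching user information found
        return False
    # all remaining fields must be consistent with the stored user information
    return all(stored.get(field) == value
               for field, value in login_info.items()
               if field != "main_keyword")
```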
In this embodiment, the permission acquisition success instruction is an instruction for prompting that the current user has successfully logged in to the vision testing service platform.
In this embodiment, the permission acquisition failure instruction is an instruction for prompting that the current user does not successfully log in to the vision testing service platform.
In this embodiment, the user type secondary confirmation instruction is an instruction for performing secondary confirmation on the first user type and the second user type of the user.
In this embodiment, the secondary login mode is a second determined manner for the user to log in the vision testing service platform.
In this embodiment, the secondary authentication mode is an authentication (security authentication) mode when the second determined user terminal logs in the vision testing service platform.
In this embodiment, the secondary login page is a network page generated based on the secondary login mode and the secondary verification mode and used for the user to log in and enter the vision detection service platform.
In this embodiment, the secondary login information is the login information input by the user side on the secondary login page.
In this embodiment, verifying whether the secondary login information is correct includes: determining a login main keyword in secondary login information, searching whether user information corresponding to the login main keyword exists in a user information base, if so, judging whether other information except the login main keyword in the secondary login information is consistent with the searched user information, and if so, judging that the secondary login information is correct; otherwise, the secondary login information is judged to be incorrect.
In this embodiment, the new user registration page is a web page for registering a new user entering the vision testing service platform.
In this embodiment, the new user registration information is the new user information input by the user terminal and received on the new user registration page.
In this embodiment, the user information verification library is a database in the vision testing service platform for storing user information.
The beneficial effects of the above technology are: the user side enters the vision detection service platform by acquiring the login authority of the vision detection service platform, the information safety of the user is guaranteed, the user type of the user is confirmed, and a foundation is provided for subsequently retrieving historical detection data and generating an individualized detection plan.
Example 3:
on the basis of embodiment 2, the method for remotely and intelligently detecting eyesight, which obtains the operation instruction input by the user side, includes:
when the user side authority is successfully acquired or a new user is successfully registered, a detection type selection instruction and a detection range selection instruction are sent to the user side;
and receiving a detection type selection feedback instruction and a detection range selection feedback instruction input by the user side.
In this embodiment, the detection category selection instruction is used to prompt the current user to select the detection category.
In this embodiment, the detection range selection instruction is used to prompt the current user to select the detection range.
In this embodiment, the detection category selection feedback instruction is used to indicate an instruction of the detection category currently selected by the user.
In this embodiment, the detection range selection feedback instruction is an instruction indicating a detection range currently selected by the user.
The beneficial effects of the above technology are: by receiving the detection type selection feedback instruction and the detection range selection feedback instruction input by the user side, the type and the corresponding detection range which the user wants to detect can be known, so that the generated personalized detection plan takes the requirements of the user into consideration, and the generated personalized detection plan is more humanized and comprehensive.
Example 4:
on the basis of the embodiment 3, the remote intelligent vision testing method, S2: retrieving corresponding historical detection data in the vision detection service platform based on the user type of the current user to obtain a retrieval result, with reference to FIG. 2, including:
S201: when the user side authority is successfully acquired and the second user type of the user side is a historical user, acquiring login information corresponding to the user side;
S202: determining final login information in the login information;
S203: determining a corresponding search term chain in the final login information, and searching corresponding historical detection information on the basis of the search term chain on the vision detection service platform;
S204: screening out historical detection data corresponding to the detection types from the historical detection information based on the detection type feedback instruction;
wherein the login information comprises: primary login information and secondary login information.
In this embodiment, the final login information is the login information last input by the user.
In this embodiment, the retrieval entry chain is a keyword used for retrieving from the vision testing service platform in the final login information, for example: name, identification number, etc.
In this embodiment, the historical detection information is related information corresponding to the vision test that the current user has performed, for example: the historical time of vision test, the historical type of vision test and the historical result of vision test.
In this embodiment, the historical detection data is the vision detection result of the corresponding detection type that the current user has performed.
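The retrieval of steps S201 to S204 can be pictured with the short sketch below; the platform interface and the shape of the login records are assumptions made only for illustration.

```python
# Hedged sketch of S201-S204: retrieve and filter historical detection data.

def retrieve_history(platform, login_records, detection_category):
    final_login = login_records[-1]                      # S202: final (most recent) login information
    # S203: retrieval term chain built from the final login information (fields assumed)
    term_chain = [final_login["name"], final_login["id_number"]]
    history_info = platform.search(term_chain)           # S203: retrieve historical detection information
    # S204: keep only the records of the requested detection category
    return [rec for rec in history_info if rec["category"] == detection_category]
```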
The beneficial effects of the above technology are: the historical detection data corresponding to the detection types are retrieved from the vision detection service platform based on the login information of the user, the defect of the traditional mode that the historical detection data of the user is obtained by consulting or inquiring paper data is overcome, and a basis is provided for the follow-up generation of a personalized detection plan corresponding to the current user.
Example 5:
on the basis of the embodiment 3, the remote intelligent vision testing method, S3: generating a corresponding detection plan based on the historical detection data and the operation instruction, including: analyzing the historical detection data to obtain the detection time intervals and detection data change amplitudes between adjacent historical detection data; obtaining an amplitude relationship coefficient corresponding to each detection time interval based on each detection time interval and the corresponding detection data change amplitude; obtaining a corresponding amplitude relationship coefficient fitting curve based on the amplitude relationship coefficients; determining the latest time interval from the latest detection time in the historical detection data to the current time; predicting a corresponding latest amplitude relationship coefficient range based on the amplitude relationship coefficient fitting curve and the latest time interval; determining the amplitude range of the latest detection data based on the latest amplitude relationship coefficient range and the latest time interval; determining a first detection range based on the latest detection data amplitude range and the latest detection data in the historical detection data; analyzing the detection range selection feedback instruction to obtain a corresponding user-selected detection range; judging whether the first detection range contains the user-selected detection range, and if so, taking the first detection range as an initial detection range; otherwise, taking the union of the first detection range and the user-selected detection range as the initial detection range; and generating a corresponding detection plan based on the initial detection range and the corresponding detection category.
In this embodiment, the detection time interval is an interval between two corresponding detection times.
In this embodiment, the detection data change amplitude is the amount of change between the corresponding detection data and the immediately preceding detection data.
In this embodiment, the amplitude relationship coefficient is a ratio of the amplitude of the corresponding detection data to the corresponding detection time interval.
In this embodiment, the amplitude relationship coefficient fitting curve is a curve formed by fitting the amplitude relationship coefficient corresponding to each detection based on the corresponding detection type.
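One way to picture the fitting described above is the sketch below, which computes the amplitude relationship coefficients from consecutive historical records and fits a simple trend with numpy; the use of numpy.polyfit, the first-order fit, and fitting coefficient against interval are assumptions made for illustration, not requirements of the embodiment.

```python
# Hedged sketch: amplitude relationship coefficients and their fitting curve.
import numpy as np

def fit_amplitude_relationship(history):
    """history: list of (detection_time_in_days, detection_value), sorted by time."""
    intervals, coefficients = [], []
    for (t0, v0), (t1, v1) in zip(history, history[1:]):
        interval = t1 - t0                        # detection time interval
        amplitude = v1 - v0                       # detection data change amplitude
        intervals.append(interval)
        coefficients.append(amplitude / interval) # amplitude relationship coefficient
    # fit a simple trend of coefficient vs. interval (first-order fit assumed)
    slope, intercept = np.polyfit(intervals, coefficients, 1)
    return slope, intercept, intervals, coefficients
```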
In this embodiment, the latest time interval is a time interval between the latest detection time in the historical detection data and the current time.
In this embodiment, predicting the corresponding latest amplitude relationship coefficient range based on the amplitude relationship coefficient fitting curve and the latest time interval includes:
acquiring the slope of the amplitude relationship coefficient fitting curve, and determining the latest standard amplitude relationship coefficient based on the slope, the latest time interval and the latest data on the amplitude relationship coefficient fitting curve, wherein the latest amplitude relationship coefficient range is [y - a, y + a], a is the fluctuation range and y is the latest data on the amplitude relationship coefficient fitting curve;
the fluctuation range a is calculated by a preset formula (not reproduced here) from y and y_m, wherein a is the fluctuation range, y is the latest data on the amplitude relationship coefficient fitting curve, and y_m is the latest standard amplitude relationship coefficient;
for example, if y_m is 0.9 and y is 0.7, then a is 0.1.
In this embodiment, a first detection range is determined based on the latest detection data amplitude range and the latest detection data in the historical detection data, that is, the latest detection data plus the latest detection data amplitude range gives the first detection range.
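Because the original formula for the fluctuation range a is supplied only as an image, the sketch below assumes a = |y_m - y| / 2, which is consistent with the worked example (y_m = 0.9, y = 0.7 gives a = 0.1); this assumed formula, and the linear extrapolation used for y_m, are illustrative only and not the patented formula itself.

```python
# Hedged sketch: predict the latest amplitude relationship coefficient range and
# the first detection range.  a = |y_m - y| / 2 is an assumption consistent with
# the example in the text, not the formula from the original filing.

def predict_first_detection_range(slope, y, latest_value, latest_interval):
    # latest standard amplitude relationship coefficient (assumed linear extrapolation)
    y_m = y + slope * latest_interval
    a = abs(y_m - y) / 2.0                        # fluctuation range (assumed formula)
    coeff_low, coeff_high = y - a, y + a          # latest coefficient range [y - a, y + a]
    # amplitude range of the latest detection data = coefficient range * latest time interval
    amp_low, amp_high = coeff_low * latest_interval, coeff_high * latest_interval
    # first detection range = latest historical detection data plus the predicted amplitude range
    return latest_value + amp_low, latest_value + amp_high

# worked example from the text: y_m = 0.9, y = 0.7 gives a = 0.1
assert abs(abs(0.9 - 0.7) / 2.0 - 0.1) < 1e-9
```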
In this embodiment, the detection range selected by the user is a detection range of the corresponding detection type selected by the user, for example: myopia is detected at 200 to 400 degrees.
In this embodiment, the initial detection range is a detection range of a corresponding detection type generated based on the historical detection data and the operation instruction.
The beneficial effects of the above technology are: by analyzing the historical detection data, a relationship coefficient between the change amplitude of the detection data and the detection time interval is obtained for the corresponding detection type, and a first detection range is accurately predicted based on this relationship coefficient; the predicted first detection range is likely to contain the user's current possible vision detection result, which greatly narrows the detection range. The first detection range and the user-selected detection range determined by the operation instruction are then considered together to generate the corresponding detection plan, which both keeps the detection range compact and fully takes the user's requirements into account.
Example 6:
on the basis of embodiment 5, the method for remote intelligent vision testing generates a corresponding testing plan based on the initial testing range and the corresponding testing category, and includes:
determining a corresponding photometric range and measurement distance based on the corresponding detection category; generating a corresponding detection word order list based on the initial detection range; generating a corresponding first detection plan based on the luminosity range and the measurement distance and the detection word sequence list; generating a corresponding first expense confirmation list based on the first detection plan and sending the first expense confirmation list to the user side;
receiving a confirmation feedback instruction input by the user side; if the confirmation feedback instruction is confirmation detection, taking the first detection plan as a corresponding detection plan; if the confirmation feedback instruction is used for adjusting the detection range or adjusting the detection type, adjusting the first detection plan based on the confirmation feedback instruction, generating a corresponding second detection plan, sending a second expense confirmation list corresponding to the second detection plan to the user side, and taking the second detection plan as the corresponding detection plan until receiving a feedback confirmation instruction input by the user side.
In this embodiment, the corresponding luminosity range and the measurement distance are determined based on the corresponding detection category, the luminosity range corresponding to the corresponding detection category is determined based on the preset relationship (specifically determined according to the detection standard) between the detection category and the luminosity range, the measurement distance accumulated in the corresponding detection is determined based on the preset relationship (specifically determined according to the detection standard) between the detection category and the measurement distance,
in this embodiment, the detection word order list is a list of detection word detection orders generated based on the detection words included in the initial detection range and a preset detection word order (generally, from top to bottom, and from left to right).
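The detection word order list can be generated roughly as follows; the chart layout (rows keyed by acuity level, each row scanned left to right) is an assumption about a typical test chart and is used only to make the sketch concrete.

```python
# Hedged sketch: build a detection word order list from the initial detection range.
# `chart` maps an acuity level to the detection words (with directions) in that row.

def build_word_order_list(chart: dict, initial_range: tuple):
    low, high = initial_range
    order = []
    # preset order assumed in the text: rows from top to bottom, words from left to right
    for level in sorted(chart, reverse=True):
        if low <= level <= high:
            order.extend(chart[level])
    return order
```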
In this embodiment, the first detection plan is a detection plan generated based on the photometric range and the measured distance, and the detection word sequence table.
In this embodiment, the first fee confirmation list is a list including fees for executing the first testing plan.
In this embodiment, the confirmation feedback instruction is an instruction for indicating that the user has confirmed the first expense confirmation list, and includes: checking, adjusting the detection range and adjusting the detection type.
In this embodiment, the second detection plan is a detection plan generated by adjusting the first detection plan based on the confirmation feedback instruction.
In this embodiment, the second fee confirmation list is a list including fees for executing the second testing plan.
The beneficial effects of the above technology are: the first detection plan is generated based on the initial detection range, the first expense confirmation list is further generated, the total expense of the detection items can be confirmed for the user side before detection is executed, expense transparence and detection item execution process transparence are achieved, the vision detection experience of the user is improved, and the vision detection process is more humanized.
Example 7:
on the basis of the embodiment 6, the remote intelligent vision testing method, S4: executing the detection plan, and receiving a direction feedback instruction input by the user side, with reference to FIG. 3, including:
S401: generating a corresponding voice prompt list based on the detected word order list and the feedback receiving time;
S402: and broadcasting prompt voice based on the voice prompt list, and receiving a direction feedback instruction input by the user side.
In this embodiment, the feedback receiving time is a time for receiving a feedback instruction selected by a user corresponding to each detection word.
In this embodiment, the voice prompt list is an execution list containing prompt voices, which is generated based on the detected word order list and the interval time corresponding to each detected word prompt voice.
In this embodiment, the prompt voice is a voice for prompting the user to input a selection feedback instruction.
In this embodiment, the direction feedback instruction is an instruction input by the user to detect the direction of the word viewed by the user.
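A minimal sketch of steps S401 and S402 follows; the fixed feedback-receiving window, the prompt wording and the client interface are illustrative assumptions.

```python
# Hedged sketch of S401-S402: voice prompt list and direction feedback collection.

def run_voice_prompts(word_order_list, client, feedback_window_s=5.0):
    # S401: voice prompt list, one prompt per detection word, with an assumed feedback window
    prompt_list = [f"Please indicate the direction of symbol {i + 1}"
                   for i in range(len(word_order_list))]
    feedback = []
    for prompt in prompt_list:
        client.broadcast(prompt)                  # S402: broadcast the prompt voice
        # wait up to feedback_window_s for the user side's direction feedback instruction
        feedback.append(client.receive_direction_feedback(timeout=feedback_window_s))
    return feedback
```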
The beneficial effects of the above technology are: and generating a voice prompt list based on the detection plan, instructing a user to execute a detection process based on the voice prompt list, receiving a direction feedback instruction input by the user side, and converting the vision detection process into a network communication mode, so that the automation of the vision detection process is realized, the dependence on professionals is reduced, and the efficiency and the accuracy of the vision detection are improved.
Example 8:
on the basis of the embodiment 7, the remote intelligent vision testing method, S5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained, including:
analyzing the direction feedback instruction in real time to obtain a direction feedback result;
judging whether the direction feedback result is consistent with the direction of the corresponding detection word or not, if so, continuing to broadcast a prompt voice based on the voice prompt list until an initial detection result is obtained;
if not, re-broadcasting the most recently broadcast prompt voice and receiving a secondary direction feedback instruction;
judging whether a secondary direction feedback result corresponding to the secondary direction feedback instruction is consistent with the direction of the corresponding detection word, and if so, continuing to broadcast prompt voices based on the voice prompt list until an initial detection result is obtained;
otherwise, determining the initial detection result based on the detection word corresponding to the most recently broadcast prompt voice.
In this embodiment, the direction feedback result is the direction of the detection word seen by the user.
In this embodiment, the secondary direction feedback instruction is an instruction which is received and is input by the user for the second time and represents the direction of the detection word seen by the user when the direction feedback result corresponding to the direction feedback instruction input by the user for the first time is not consistent with the direction of the corresponding detection word.
In this embodiment, the secondary direction feedback result is the direction of the detected word that is received and is input by the user for the second time when the direction feedback result corresponding to the direction feedback instruction input by the user for the first time is not consistent with the direction of the corresponding detected word.
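The consistency check of this embodiment can be summarised by the sketch below; stopping after a second mismatch follows the text, while the data structures and the derivation of the final result are assumptions.

```python
# Hedged sketch of S5: adjust the detection plan in real time from direction feedback.
# `plan` is assumed to be an iterable of detection words with .prompt_voice and .direction.

def run_adjusted_detection(plan, client):
    result = None
    for word in plan:
        client.broadcast(word.prompt_voice)
        if client.receive_direction_feedback() == word.direction:
            result = word                      # feedback consistent: record and continue
            continue
        client.broadcast(word.prompt_voice)    # re-broadcast the most recently broadcast prompt voice
        if client.receive_direction_feedback() == word.direction:
            result = word                      # secondary feedback consistent: continue
            continue
        result = word                          # two mismatches: the initial detection result is
        break                                  # then derived from this most recent detection word
    return result                              # derivation of the final acuity value not shown
```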
The beneficial effects of the above technology are: the detection plan is adjusted in real time based on the direction feedback instruction until an initial detection result is obtained, the detection plan can be adjusted in real time based on the direction feedback instruction input by a user, flexible adjustment of the vision detection process is achieved, and the problems that the vision detection process is redundant, long in consumed time and low in efficiency due to the fact that the detection process which is not artificially generated is too rigid and inflexible are solved.
Example 9:
on the basis of the embodiment 1, the remote intelligent vision detection method, S6: updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list, wherein the method comprises the following steps:
storing the initial detection result and the current detection time into historical detection data corresponding to the detection types in the vision detection service platform;
generating a corresponding payment instruction based on the expense confirmation list finally sent to the user side, and sending the payment instruction to the user side;
calling all historical detection data of the user side, and performing structured storage on the historical detection data according to detection types to generate an all-directional detection list corresponding to the user side;
analyzing the historical detection data to determine the risk coefficient corresponding to each detection type;
and screening out corresponding recommended items from an item library based on the detection types and the corresponding risk coefficients, generating a corresponding intelligent recommendation list based on all the recommended items, and sending the intelligent recommendation list to the user side.
In this embodiment, the omni-directional detection list is a list that is generated by performing structural adjustment based on all historical detection data of the current user and includes all detection data corresponding to all detection types.
In this embodiment, analyzing the historical detection data to determine the risk coefficient of the corresponding detection type includes:
computing the risk coefficient of the current detection type by a preset formula (not reproduced here), wherein j is the j-th historical detection data in the current detection type, m is the total number of historical detection data in the current detection type, x_m is the m-th historical detection data in the current detection type, x_j is the j-th historical detection data in the current detection type, x_(m-1) is the (m-1)-th historical detection data in the current detection type, x_(j+1) is the (j+1)-th historical detection data in the current detection type, x_01 is the first risk level coefficient, x_02 is the second risk level coefficient, and x_03 is the third risk level coefficient; when (x_m - x_j) is 0, the corresponding term is taken as 0, and when (x_(j+1) - x_j) is 0, the corresponding term is taken as 0;
the first risk level coefficient is a coefficient corresponding to a set first risk level, the second risk level coefficient is a coefficient corresponding to a set second risk level, and the third risk level coefficient is a coefficient corresponding to a set third risk level.
For example, if the historical detection data of the current detection type are, in order, 20, 50 and 30, x_01 is 20, x_02 is 30 and x_03 is 40, then the risk coefficient is 12.42.
In this embodiment, screening out a corresponding recommended item from an item library based on the detection type and the corresponding risk coefficient includes:
based on the risk coefficient corresponding to each detection type and the recommended item list stored in the item library (which contains all applicable items for each risk coefficient of each detection type), the recommended items (such as blue-light-filtering lenses and astigmatism lenses) corresponding to each detection type of the current user are determined.
In this embodiment, the intelligent recommendation list is a list including all recommended items corresponding to the current user.
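A minimal sketch of the screening step, assuming the item library maps each detection type to risk-coefficient bands with their applicable items. The thresholds, band structure and item names below are hypothetical and are not taken from the patent.

```python
from typing import Dict, List, Tuple

# Hypothetical item library: for each detection type, a list of
# (risk-coefficient lower bound, recommended items) pairs, highest bound first.
ITEM_LIBRARY: Dict[str, List[Tuple[float, List[str]]]] = {
    "refraction": [
        (30.0, ["astigmatism lens", "follow-up optometry"]),
        (10.0, ["blue-light lens"]),
        (0.0,  ["routine annual check"]),
    ],
}

def screen_recommended_items(risk_by_type: Dict[str, float]) -> List[str]:
    """For each detection type, pick the items whose risk band the user's
    risk coefficient falls into, and merge them into one recommendation list."""
    recommended: List[str] = []
    for detection_type, risk in risk_by_type.items():
        for lower_bound, items in ITEM_LIBRARY.get(detection_type, []):
            if risk >= lower_bound:
                recommended.extend(items)
                break  # take only the highest applicable band
    return recommended

# Example: a risk coefficient of 12.42 for the refraction type.
print(screen_recommended_items({"refraction": 12.42}))  # ['blue-light lens']
```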
The beneficial effects of the above technology are: the initial detection result is updated to the vision detection service platform and a corresponding payment instruction and intelligent recommendation list are generated, so that the latest vision detection data of the user are stored automatically and the corresponding payment instruction and intelligent recommendation list are produced, making the vision detection process intelligent, streamlined and user-friendly.
Example 10:
a remote intelligent vision detection system, referring to fig. 4, comprising:
the input module is used for the user side to acquire the login authority of the vision detection service platform and input an operation instruction;
the retrieval module is used for retrieving corresponding historical detection data on the vision detection service platform based on the user type of the current user;
the generating module is used for generating a corresponding detection plan based on the historical detection data and the operation instruction;
the execution module is used for executing the detection plan and receiving a direction feedback instruction input by the user side;
the adjusting module is used for adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
and the output module is used for updating the initial detection result to the vision detection service platform and generating a corresponding payment instruction and an intelligent recommendation list.
The beneficial effects of the above technology are: by providing the input module, the retrieval module, the generation module, the execution module, the adjustment module and the output module, the invention enables the user to log in to the vision detection service platform over the network and input an operation instruction; a personalized detection plan is generated from the input operation instruction and the user's historical detection data stored on the vision detection service platform; voice prompts guide the user to input direction feedback instructions, converting the spoken instructions of a traditional vision test into network communication; the detection plan can be adjusted in real time according to the direction feedback instructions returned by the user, which greatly improves detection efficiency, shortens the detection process and ensures the accuracy of the vision detection; and the vision detection results and historical detection data are stored, updated and viewed on the network cloud service platform, realizing an intelligent detection process.
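The six modules above can be read as a simple pipeline. The sketch below wires them together in Python purely to illustrate the data flow of embodiment 10; the class, method and field names are assumptions for illustration, not the patent's API, and each module body is a stand-in for the behaviour described in the text.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RemoteVisionTestPipeline:
    """Illustrative wiring of the input, retrieval, generation, execution,
    adjustment and output modules described in embodiment 10."""
    platform: Dict[str, List[dict]] = field(default_factory=dict)  # stand-in for the cloud platform

    def input_module(self, user_id: str, operation: dict) -> dict:
        # In the real system this step also handles login and permission checks.
        return {"user_id": user_id, "operation": operation}

    def retrieval_module(self, session: dict) -> List[dict]:
        return self.platform.get(session["user_id"], [])

    def generation_module(self, history: List[dict], session: dict) -> dict:
        # Stand-in: the detection plan is some function of history and the request.
        return {"chart_rows": 5, "category": session["operation"]["category"]}

    def execution_module(self, plan: dict) -> List[str]:
        # Stand-in for voice prompts and the direction feedback collected from the user.
        return ["up", "left", "down"][: plan["chart_rows"]]

    def adjustment_module(self, plan: dict, feedback: List[str]) -> dict:
        # Stand-in: the initial result is derived from the feedback answers.
        return {"category": plan["category"], "score": 4.0 + 0.1 * len(feedback)}

    def output_module(self, user_id: str, result: dict) -> dict:
        self.platform.setdefault(user_id, []).append(result)
        return {"payment_instruction": "pay", "recommendation_list": ["blue-light lens"]}

    def run(self, user_id: str, operation: dict) -> dict:
        session = self.input_module(user_id, operation)
        history = self.retrieval_module(session)
        plan = self.generation_module(history, session)
        feedback = self.execution_module(plan)
        result = self.adjustment_module(plan, feedback)
        return self.output_module(user_id, result)

print(RemoteVisionTestPipeline().run("user-001", {"category": "visual_acuity"}))
```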
Example 11:
A remote intelligent vision detection apparatus for implementing the remote intelligent vision detection method according to any one of embodiments 1-9, with reference to figs. 5-8, comprising:
the display screen 2 is arranged on the front surface of the user side shell 1, and four sides of the display screen 2 are wrapped by the user side shell 1;
the internal structure of the user side shell 1 may follow that of an existing user side shell (for example, an existing vision test light box);
the surface of the user side shell 1 is also provided with a position detection device 3, an operation key 4 and a voice interaction device 5, and the position detection device 3, the operation key 4 and the voice interaction device 5 are all located below the display screen 2;
the position detection device 3 is used for detecting position information between the user side shell 1 and the user (specifically, this position information includes the relative distance between the user side shell 1 and the user, and the difference between the height of the user side shell 1 and the height of the user's line of sight);
a position adjustment device is arranged on the user side shell 1; the position adjustment device is arranged on the rear surface of the user side shell 1 and is used for adjusting the position between the user side shell 1 and the user according to the position information between the user side shell 1 and the user detected by the position detection device 3; the position adjustment device comprises:
the device comprises a fixed plate 6, a moving plate 7, a motor 8, an output shaft 9, a first transmission shaft 10, a first belt wheel 11, a first gear 12, a clutch 13, a first lead screw 14, a moving block 15, a fixed sleeve 16, a supporting plate 17, a first connecting rod 18, a second connecting rod 19, a sliding chute 20, a second lead screw 21, a fixed block 22, a second transmission shaft 23, a second belt wheel 24, a synchronous belt 25, a second gear 26 and a third transmission shaft 27;
the motor 8 is fixed on the moving plate 7, the output end of the motor 8 is fixed with one end of an output shaft 9, the other end of the output shaft 9 is fixed with a first belt wheel 11, and the first belt wheel 11 is connected with a second belt wheel 24 through a synchronous belt 25;
one end of the first transmission shaft 10 is fixed with a first belt pulley 11, the other end of the first transmission shaft 10 is fixed with a first gear 12, the first gear 12 is meshed with a second gear 26, the second gear 26 is fixed at one end of a third transmission shaft 27, and the other end of the third transmission shaft 27 is connected with one end of a first lead screw 14 through a clutch 13;
a moving block 15 is sleeved on the first lead screw 14, the moving block 15 is in threaded connection with the first lead screw 14, the other end of the first lead screw 14 is connected to a supporting plate 17, the supporting plate 17 is fixed on the moving plate 7, the fixed sleeve 16 is rotatably sleeved at the other end of the first lead screw 14, and the fixed sleeve 16 is fixed with the supporting plate 17;
one end of the first connecting rod 18 is rotatably connected to the moving block 15, the other end of the first connecting rod 18 is slidably connected to the fixed plate 6, one end of the second connecting rod 19 is rotatably connected to the fixed sleeve 16, and the other end of the second connecting rod 19 is rotatably connected to the fixed plate 6;
the user side shell 1 is fixed to the fixed plate 6;
one end of the second transmission shaft 23 is fixed on a second belt pulley 24, the other end of the second transmission shaft 23 is connected with a second lead screw 21 through a clutch 13, and the other end of the second lead screw 21 is connected with the supporting plate 17;
the movable plate 7 is provided with a sliding groove 20, the fixed block 22 penetrates through the sliding groove 20, the fixed block 22 is sleeved on the second lead screw 21, and the fixed block 22 is in threaded connection with the second lead screw 21.
The working principle and beneficial effects of the above technology are as follows: the motor 8 provides power for the first lead screw 14 and the second lead screw 21, and the clutch 13 arranged between the first lead screw 14 and the second lead screw 21 engages and disengages the power so that the rotation of the first lead screw 14 and the rotation of the second lead screw 21 do not interfere with each other; the second lead screw 21 rotates in the fixed block 22 to drive the moving plate 7 up and down; the rotation of the first lead screw 14 changes the distance between the moving block 15 and the fixed sleeve 16 and thereby the distance between the fixed plate 6 and the moving plate 7; as a result, the position of the user side shell 1 can be adjusted automatically according to the user position detected by the position detection device 3;
By providing the operation key 4, the display screen 2 and the voice interaction device 5, the user can log in to the vision detection service platform over the network and input an operation instruction, and a personalized detection plan is generated based on the input operation instruction and the user's historical detection data on the vision detection service platform; the position detection device 3 detects the position information between the user side shell 1 and the user, and the position adjustment device then changes the relative position between the user side shell 1 and the user so that the position between the user side shell 1 and the user is optimal.
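A minimal control sketch of the position adjustment described above, assuming the position detection device reports the relative distance and the line-of-sight height difference, and that each lead screw is driven in turn through its clutch. The target distance and tolerance values below are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PositionReading:
    distance_m: float     # relative distance between the user side shell and the user
    height_diff_m: float  # shell height minus the user's line-of-sight height

def adjustment_commands(reading: PositionReading,
                        target_distance_m: float = 2.5,
                        tolerance_m: float = 0.05) -> List[str]:
    """Translate a position reading into drive commands: the second lead screw
    moves the moving plate (and the shell) vertically, while the first lead screw
    changes the spacing between the fixed plate and the moving plate."""
    commands: List[str] = []
    if abs(reading.height_diff_m) > tolerance_m:
        direction = "down" if reading.height_diff_m > 0 else "up"
        commands.append(f"engage clutch of second lead screw, drive shell {direction}")
    error = reading.distance_m - target_distance_m
    if abs(error) > tolerance_m:
        direction = "forward" if error > 0 else "backward"
        commands.append(f"engage clutch of first lead screw, move shell {direction}")
    return commands

# Example: shell 0.10 m below the user's line of sight and 0.3 m too far away.
print(adjustment_commands(PositionReading(distance_m=2.8, height_diff_m=-0.10)))
```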
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A remote intelligent vision detection method is characterized by comprising the following steps:
s1: the user side acquires the login authority of the vision detection service platform and inputs an operation instruction;
s2: retrieving corresponding historical detection data on the vision detection service platform based on the user type of the current user;
s3: generating a corresponding detection plan based on the historical detection data and the operation instruction;
s4: executing the detection plan, and receiving a direction feedback instruction input by the user side;
s5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
s6: and updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list.
2. The remote intelligent vision detection method of claim 1, wherein S1: the user side acquires the login authority of the vision detection service platform and inputs an operation instruction, comprises the following steps:
receiving a first user type feedback instruction and a second user type feedback instruction input by the user side, wherein the first user type comprises: a medical care side, a user side and a parent side, and the second user type comprises: historical users and new users;
determining a corresponding login recommendation mode based on the first user type feedback instruction, and displaying the login recommendation mode on a login mode selection page;
receiving a login mode selection feedback instruction input by the user side;
determining a login mode corresponding to the user side based on the login mode selection feedback instruction;
determining a corresponding verification mode based on the second user type feedback instruction;
generating a corresponding login page based on the verification mode and the login mode, and jumping a display page of the user side to the login page;
receiving initial login information input by the user side from the login page, verifying whether the initial login information is correct or not, and if so, sending an authority acquisition success instruction to the user side;
otherwise, sending an authority acquisition failure instruction to the user side, and simultaneously sending a user type secondary confirmation instruction to the user side;
determining a secondary login mode and a secondary verification mode based on the secondary confirmation instruction;
generating a secondary login page based on the secondary login mode and the secondary verification mode, and skipping a display page of the user side to the secondary login page;
receiving secondary login information input by the user side from the secondary login page, verifying whether the secondary login information is correct or not, and if so, sending an authority acquisition success instruction to the user side;
otherwise, sending a permission secondary acquisition failure instruction to the user side, and skipping a display page of the user side to a new user registration page;
receiving new user registration information input by the user side from the new user registration page, storing the new user registration information into a user information verification library, and sending an authority acquisition success instruction to the user side;
and when the user side authority is successfully acquired, acquiring an operation instruction input by the user side.
3. The remote intelligent vision detection method of claim 2, wherein acquiring the operation instruction input by the user side comprises:
when the user side authority is successfully acquired or a new user is successfully registered, a detection type selection instruction and a detection range selection instruction are sent to the user side;
and receiving a detection type selection feedback instruction and a detection range selection feedback instruction input by the user side.
4. The remote intelligent vision detection method of claim 3, wherein S2: retrieving corresponding historical detection data on the vision detection service platform based on the user type of the current user to obtain a retrieval result, comprises the following steps:
s201: when the user side authority is successfully acquired and the second user type of the user side is a historical user, acquiring login information corresponding to the user side;
s202: determining final login information in the login information;
s203: determining a corresponding search term chain in the final login information, and searching corresponding historical detection information on the basis of the search term chain on the vision detection service platform;
s204: screening out historical detection data of the corresponding detection type from the historical detection information based on the detection type selection feedback instruction;
wherein the login information comprises: primary login information and secondary login information.
5. The remote intelligent vision detection method of claim 3, wherein S3: generating a corresponding detection plan based on the historical detection data and the operation instruction comprises:
analyzing the historical detection data to obtain the detection time interval and the detection data variation amplitude between adjacent historical detection data;
obtaining an amplitude relation coefficient corresponding to each detection time interval based on each detection time interval and the corresponding detection data variation amplitude;
obtaining a corresponding amplitude relation coefficient fitting curve based on the amplitude relation coefficients;
determining the latest time interval between the latest detection time in the historical detection data and the current time;
predicting a corresponding latest amplitude relation coefficient range based on the amplitude relation coefficient fitting curve and the latest time interval;
determining the latest detection data amplitude range based on the latest amplitude relation coefficient range and the latest time interval;
determining a first detection range based on the latest detection data within the latest detection data amplitude range and the historical detection data;
analyzing the detection range selection feedback instruction to obtain the corresponding user-selected detection range;
judging whether the first detection range contains the user-selected detection range, and if so, taking the first detection range as an initial detection range;
otherwise, taking the union of the first detection range and the user-selected detection range as the initial detection range;
and generating a corresponding detection plan based on the initial detection range and the corresponding detection category.
6. The remote intelligent vision detection method of claim 5, wherein generating a corresponding detection plan based on the initial detection range and the corresponding detection category comprises:
determining a corresponding photometric range and measurement distance based on the corresponding detection category;
generating a corresponding detection word sequence list based on the initial detection range;
generating a corresponding first detection plan based on the photometric range, the measurement distance and the detection word sequence list;
generating a corresponding first expense confirmation list based on the first detection plan and sending the first expense confirmation list to the user side;
receiving a confirmation feedback instruction input by the user side;
if the confirmation feedback instruction is confirmation detection, taking the first detection plan as a corresponding detection plan;
if the confirmation feedback instruction is to adjust the detection range or to adjust the detection type, adjusting the first detection plan based on the confirmation feedback instruction, generating a corresponding second detection plan, sending a second expense confirmation list corresponding to the second detection plan to the user side, and taking the second detection plan as the corresponding detection plan once a confirmation feedback instruction input by the user side is received.
7. The remote intelligent vision detection method of claim 6, wherein S4: executing the detection plan, and receiving a direction feedback instruction input by the user side, comprises the following steps:
s401: generating a corresponding voice prompt list based on the detection word sequence list and the feedback receiving time;
s402: broadcasting prompt voice based on the voice prompt list, and receiving a direction feedback instruction input by the user side;
s5: adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained, including:
analyzing the direction feedback instruction in real time to obtain a direction feedback result;
judging whether the direction feedback result is consistent with the direction of the corresponding detection word or not, if so, continuing to broadcast a prompt voice based on the voice prompt list until an initial detection result is obtained;
if not, re-broadcasting the most recently broadcast prompt voice and receiving a secondary direction feedback instruction;
judging whether a secondary direction feedback result corresponding to the secondary direction feedback instruction is consistent with the direction of the corresponding detection word or not, if so, continuing to broadcast a prompt voice based on the voice prompt list until an initial detection result is obtained;
otherwise, determining an initial detection result based on the detection words corresponding to the latest broadcasted prompt voice.
8. The remote intelligent vision detection method of claim 1, wherein S6: updating the initial detection result to the vision detection service platform, and generating a corresponding payment instruction and an intelligent recommendation list, comprises the following steps:
storing the initial detection result and the current detection time into historical detection data corresponding to the detection types in the vision detection service platform;
generating a corresponding payment instruction based on the expense confirmation list finally sent to the user side, and sending the payment instruction to the user side;
retrieving all historical detection data of the user side, and storing the historical detection data in a structured manner according to detection type to generate an omni-directional detection list corresponding to the user side;
analyzing the historical detection data to determine the risk coefficient corresponding to each detection type;
and screening corresponding recommended items out of an item library based on each detection type and its risk coefficient, generating a corresponding intelligent recommendation list from all the recommended items, and sending the intelligent recommendation list to the user side.
9. A remote intelligent vision detection system, comprising:
the input module is used for the user side to acquire the login authority of the vision detection service platform and input an operation instruction;
the retrieval module is used for retrieving corresponding historical detection data on the vision detection service platform based on the user type of the current user;
the generating module is used for generating a corresponding detection plan based on the historical detection data and the operation instruction;
the execution module is used for executing the detection plan and receiving a direction feedback instruction input by the user side;
the adjusting module is used for adjusting the detection plan in real time based on the direction feedback instruction until an initial detection result is obtained;
and the output module is used for updating the initial detection result to the vision detection service platform and generating a corresponding payment instruction and an intelligent recommendation list.
10. A remote intelligent vision detection apparatus for implementing the remote intelligent vision detection method according to any one of claims 1-8, the remote intelligent vision detection apparatus comprising:
a user side shell (1) and a display screen (2), wherein the display screen (2) is arranged on the front surface of the user side shell (1), and four sides of the display screen (2) are wrapped by the user side shell (1);
the surface of the user side shell (1) is also provided with a position detection device (3), an operation key (4) and a voice interaction device (5), and the position detection device (3), the operation key (4) and the voice interaction device (5) are all positioned below the display screen (2);
the position detection device (3) is used for detecting the position information between the user end shell (1) and a user;
a position adjustment device is arranged on the user side shell (1); the position adjustment device is arranged on the rear surface of the user side shell (1) and is used for adjusting the position between the user side shell (1) and the user according to the position information between the user side shell (1) and the user detected by the position detection device (3); the position adjustment device comprises:
the device comprises a fixing plate (6), a moving plate (7), a motor (8), an output shaft (9), a first transmission shaft (10), a first belt wheel (11), a first gear (12), a clutch (13), a first lead screw (14), a moving block (15), a fixing sleeve (16), a supporting plate (17), a first connecting rod (18), a second connecting rod (19), a sliding groove (20), a second lead screw (21), a fixing block (22), a second transmission shaft (23), a second belt wheel (24), a synchronous belt (25), a second gear (26) and a third transmission shaft (27);
the motor (8) is fixed on the moving plate (7), the output end of the motor (8) is fixed with one end of an output shaft (9), a first belt wheel (11) is fixed at the other end of the output shaft (9), and the first belt wheel (11) is connected with a second belt wheel (24) through a synchronous belt (25);
one end of the first transmission shaft (10) is fixed with a first belt wheel (11), the other end of the first transmission shaft (10) is fixed with a first gear (12), the first gear (12) is meshed with a second gear (26), the second gear (26) is fixed at one end of a third transmission shaft (27), and the other end of the third transmission shaft (27) is connected with one end of a first lead screw (14) through a clutch (13);
a moving block (15) is sleeved on the first lead screw (14), the moving block (15) is in threaded connection with the first lead screw (14), the other end of the first lead screw (14) is connected to a supporting plate (17), the supporting plate (17) is fixed to the moving plate (7), the fixed sleeve (16) is rotatably sleeved at the other end of the first lead screw (14), and the fixed sleeve (16) is fixed to the supporting plate (17);
one end of the first connecting rod (18) is rotatably connected to the moving block (15), the other end of the first connecting rod (18) is slidably connected to the fixed plate (6), one end of the second connecting rod (19) is rotatably connected to the fixed sleeve (16), and the other end of the second connecting rod (19) is rotatably connected to the fixed plate (6);
the user side shell (1) is fixed with the fixing plate (6);
one end of the second transmission shaft (23) is fixed on a second belt wheel (24), the other end of the second transmission shaft (23) is connected with a second lead screw (21) through a clutch (13), and the other end of the second lead screw (21) is connected with the supporting plate (17);
the movable plate (7) is provided with a sliding groove (20), the fixed block (22) penetrates through the sliding groove (20), the fixed block (22) is sleeved on the second lead screw (21), and the fixed block (22) is in threaded connection with the second lead screw (21).
CN202210657207.0A 2022-06-10 2022-06-10 Remote intelligent vision detection method, system and device Active CN115054198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210657207.0A CN115054198B (en) 2022-06-10 2022-06-10 Remote intelligent vision detection method, system and device

Publications (2)

Publication Number Publication Date
CN115054198A true CN115054198A (en) 2022-09-16
CN115054198B CN115054198B (en) 2023-07-21

Family

ID=83199917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210657207.0A Active CN115054198B (en) 2022-06-10 2022-06-10 Remote intelligent vision detection method, system and device

Country Status (1)

Country Link
CN (1) CN115054198B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180113717A1 (en) * 2016-10-21 2018-04-26 Inno Stream Technology Co., Ltd. Method for processing innovation-creativity data information, user equipment and cloud server
CN108960166A (en) * 2018-07-11 2018-12-07 谢涛远 A kind of vision testing system, method, terminal and medium
CN109330555A (en) * 2018-09-14 2019-02-15 侯尧珍 A kind of intelligent eyesight detection based on cloud computing and training correction system
CN109893080A (en) * 2019-03-26 2019-06-18 张旭 A kind of intelligent interactive method of self-service measurement eyesight
CN110393503A (en) * 2019-07-18 2019-11-01 苏州国科康成医疗科技有限公司 Vision inspection system with cloud service function
CN111803022A (en) * 2020-06-24 2020-10-23 深圳数联天下智能科技有限公司 Vision detection method, detection device, terminal equipment and readable storage medium
CN112656363A (en) * 2020-12-17 2021-04-16 上海艾康特医疗科技有限公司 Vision detection system and vision detection method
CN112932402A (en) * 2021-02-07 2021-06-11 浙江工贸职业技术学院 Self-service vision screening system based on artificial intelligence and intelligent perception
CN114190880A (en) * 2021-12-09 2022-03-18 深圳创维-Rgb电子有限公司 Vision detection method and device, electronic equipment and storage medium
CN114468973A (en) * 2022-01-21 2022-05-13 广州视域光学科技股份有限公司 Intelligent vision detection system

Also Published As

Publication number Publication date
CN115054198B (en) 2023-07-21

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant