CN117547217A - Man-machine interaction vision detection method and system based on virtual reality - Google Patents


Info

Publication number
CN117547217A
CN117547217A (application CN202311576502.4A)
Authority
CN
China
Prior art keywords
detection
data
vision
test
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311576502.4A
Other languages
Chinese (zh)
Inventor
彭文平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Ruoying Technology Co ltd
Original Assignee
Changzhou Ruoying Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Ruoying Technology Co ltd filed Critical Changzhou Ruoying Technology Co ltd
Priority to CN202311576502.4A priority Critical patent/CN117547217A/en
Publication of CN117547217A publication Critical patent/CN117547217A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/02Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B3/028Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B3/032Devices for presenting test symbols or characters, e.g. test chart projectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ophthalmology & Optometry (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The application relates to a human-computer interaction vision detection method and system based on virtual reality, belonging to the technical field of vision detection. The method uses a human-computer interaction vision detection cabin divided into four detection areas, each provided with a human-computer interaction detection screen that serves as the executing body. The detection method comprises the following steps: receiving a wake-up signal; receiving the height data of the person to be tested; adjusting the height of the detection screen based on the height data; receiving image information of the person to be tested; broadcasting test operation instructions after receiving a start instruction; and, after the test is finished, generating a vision test sheet based on the image information. The application has the beneficial effect of improving detection efficiency.

Description

Man-machine interaction vision detection method and system based on virtual reality
Technical Field
The application relates to the technical field of vision testing, in particular to a man-machine interaction vision testing method and system based on virtual reality.
Background
In practical vision testing, the person to be tested and the examiner are generally in a one-to-one or one-to-many configuration: one examiner is responsible for one or more persons to be tested, who are tested in sequence. In places with many persons to be tested, such as hospitals, this arrangement is tiring for both the persons to be tested and the examiner, and detection efficiency is low.
Disclosure of Invention
In order to improve detection efficiency, the application provides a man-machine interaction vision detection method and system based on virtual reality.
In a first aspect, the present application provides a human-computer interaction vision detection method based on virtual reality, which adopts the following technical scheme:
the human-computer interaction vision detection method based on virtual reality uses a human-computer interaction vision detection cabin, which comprises four detection areas; each detection area is provided with a human-computer interaction detection screen, which serves as the executing body. The detection method comprises the following steps:
receiving a wake-up signal;
receiving height data of a person to be tested;
based on the height data, adjusting the height of the man-machine interaction detection screen;
receiving the image information of the person to be detected;
after receiving the start instruction, broadcasting a test operation instruction;
and after the test is finished, generating a vision testing sheet based on the image information.
By adopting the technical scheme, since the man-machine interaction vision detection cabin is divided into four detection areas, four persons can be tested at the same time. After a person to be tested enters a detection area, the corresponding man-machine interaction detection screen is woken up, receives the person's height data, and adjusts its own height accordingly. The screen then acquires image information of the person to be tested and, after receiving a start instruction issued by the person, broadcasts test operation instructions; the person performs the test according to these instructions, and after the test is completed a vision test sheet is generated from the image information. Because several persons can be tested simultaneously, and the detection screen itself guides the person through the test without an additional examiner, the scheme is friendly to both the persons to be tested and the examiner, and detection efficiency is improved.
Optionally, the detection method further includes:
after each broadcast of a test operation instruction, receiving a feedback instruction;
judging whether the feedback duration of the received feedback instruction is within a duration threshold range;
if not, adjusting the subsequent test operation instructions.
Optionally, the adjusting the subsequent test operation instruction specifically includes:
if the feedback duration is greater than the maximum of the duration threshold range, testing in top-to-bottom order of the visual acuity chart;
and if the feedback duration is less than the minimum of the duration threshold range, testing in bottom-to-top order of the visual acuity chart.
By adopting the technical scheme, if the feedback duration is greater than the maximum of the duration threshold range, the person to be tested cannot see the current row clearly, so the test can proceed downward starting from the first row of the visual acuity chart or a row near it; if the feedback duration is less than the minimum of the duration threshold range, the person sees the current row very clearly, so the test can proceed upward starting from the last row of the chart or a row near it.
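The feedback-duration rule described above can be sketched as follows. The threshold values (1.0 s and 5.0 s) and all function and variable names are illustrative assumptions, not values given in this patent.

```python
# Illustrative sketch of the feedback-duration adjustment rule.
# The low/high thresholds are assumed example values.
def choose_direction(feedback_s, low=1.0, high=5.0):
    """Decide how to adjust subsequent test rows on the visual acuity chart.

    feedback_s -- seconds between broadcasting the test operation
                  instruction and receiving the person's feedback.
    """
    if feedback_s > high:
        # Too slow: the person cannot see the current row clearly, so
        # restart near the top of the chart and test downward.
        return "top_down"
    if feedback_s < low:
        # Very fast: the current row is clearly visible, so restart near
        # the bottom of the chart and test upward.
        return "bottom_up"
    return "continue"
```

A feedback duration inside the threshold range leaves the instruction sequence unchanged.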
Optionally, before broadcasting the test operation instruction, the method includes:
receiving initial eye data of the person to be tested;
searching a preset detection data database for historical detection data matching the initial eye data;
retrieving the test final row data associated with the matched detection data;
and confirming the final row data with the greatest number of occurrences as the test head row in the test operation instructions.
By adopting the technical scheme, after the initial eye data of the person to be tested is received, matching historical detection data is searched in the detection data database, yielding multiple pieces of test final row data belonging to multiple historical testers; the final row value that occurs most often is then set as the test head row in the test operation instructions. In this way, the first row to be tested can be determined quickly and approximately, which shortens the test time and avoids starting every test from the first row of the chart or from a fixed row.
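The history lookup just described can be sketched as follows, assuming the database records are simple (eye_data, final_row) pairs; the record format, names, and default row are assumptions for illustration only.

```python
from collections import Counter

# Hedged sketch of the head-row lookup: among historical records whose eye
# data matches the person's initial eye data, pick the most frequent
# test final row as the test head row.
def pick_head_row(history, initial_eye_data, default=1):
    """history: list of (eye_data, final_row) pairs; returns a row index."""
    matched = [final for eye, final in history if eye == initial_eye_data]
    if not matched:
        return default  # no matching history: fall back to a default row
    return Counter(matched).most_common(1)[0][0]
```

With records [(300, 5), (300, 6), (300, 5)] and input 300, row 5 occurs most often and becomes the test head row.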
Optionally, the initial eye data includes diopter and glasses degree; the historical detection data comprises the eye data of all historical testers and the associated test final row data;
after receiving the glasses degree of the person to be tested, judging whether the degree is a single value or a range;
if it is a single value, directly searching the detection data database for matching values;
if it is a range, taking the mean of the two endpoint values and searching the detection data database for matching values;
retrieving the test final row data associated with the matched values;
and confirming the final row data with the greatest number of occurrences as the test head row in the test operation instructions.
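The value-versus-range handling for the glasses degree can be sketched as below; representing a range as a (low, high) pair is an assumption made for illustration.

```python
# Hedged sketch of the degree-normalization step: a single value is used
# directly, while a (low, high) range is replaced by the mean of its two
# endpoint values before searching the database.
def normalize_degree(degree):
    if isinstance(degree, (tuple, list)) and len(degree) == 2:
        low, high = degree
        return (low + high) / 2  # mean of the two endpoints
    return degree  # already a single value
```

For example, an entered range of 200 to 300 degrees is searched as 250 degrees.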
Optionally, the detection method further includes:
when a plurality of persons to be tested enter the man-machine interaction vision detection cabin and wait for detection, judging whether the test head rows of all the detection areas are the same;
if yes, sending a total control instruction;
based on the total control instruction, broadcasting the test operation instructions uniformly;
if not, sending single control instructions;
based on the single control instructions, broadcasting the test operation instructions independently in each detection area.
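The choice between a total control instruction and single control instructions reduces to a comparison of the head rows, as in this illustrative sketch (all names are assumptions):

```python
# Illustrative decision between total control and single control: if all
# occupied detection areas share the same test head row, one unified
# broadcast is used; otherwise each area broadcasts independently.
def control_mode(head_rows):
    """head_rows: the test head row of each occupied detection area."""
    return "total_control" if len(set(head_rows)) == 1 else "single_control"
```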
Optionally, after generating the vision test sheet, the method includes:
receiving a manual service request;
based on the manual service request, connecting a video call to an idle ophthalmologist.
By adopting the technical scheme, if something in the vision test sheet is unclear to the person to be tested, or the person has questions to ask, a manual service request can be sent, and the man-machine interaction detection screen connects a video call to an idle ophthalmologist, so that the person's questions can be answered online.
Optionally, after generating the vision test sheet, the method further includes:
receiving a manual inspection request;
judging whether manual inspection requests have been received from more than half of the persons being tested;
if yes, sending prompt information to the terminal of an ophthalmologist, who quickly descends from upstairs; the man-machine interaction vision detection cabin is located downstairs, and a quick-descent channel is arranged in the middle of the four detection areas;
if not, broadcasting the location of the ophthalmology department.
By adopting the technical scheme, the ophthalmologist can quickly reach the floor below through the quick-descent channel and inspect, in turn, the persons to be tested who require manual inspection.
In a second aspect, the present application provides a human-computer interaction vision detection system based on virtual reality, which adopts the following technical scheme:
a human-computer interaction vision testing system based on virtual reality, comprising:
the human-computer interaction detection screens are distributed in four detection areas, the four detection areas are arranged in a human-computer interaction vision detection cabin, and the cabin is installed in advance in a vision detection site;
the human body infrared sensors are distributed in the four detection areas and are used for sending a wake-up signal when detecting a person to be detected;
the distance measuring sensors are arranged near the human body infrared sensors and are used for measuring the height of the person to be tested;
the adjustment module is used for adjusting the height of the man-machine interaction detection screen based on the height data;
the image acquisition module is arranged on the man-machine interaction detection screen and is used for acquiring image information of a person to be detected;
the man-machine interaction detection screen comprises:
the signal receiving module is used for receiving the wake-up signal;
the data receiving module is used for receiving height data;
the information receiving module is used for receiving the image information of the personnel to be detected;
the instruction receiving module is used for receiving a start instruction;
the voice broadcasting module is used for broadcasting a test operation instruction after receiving the start instruction;
and the detection list generation module is used for generating a vision detection list based on the image information after the test is completed.
By adopting the technical scheme, since the man-machine interaction vision detection cabin is divided into four detection areas, four persons can be tested at the same time. After a person to be tested enters a detection area, the corresponding man-machine interaction detection screen is woken up, receives the person's height data, and adjusts its own height accordingly. The screen then acquires image information of the person to be tested and, after receiving a start instruction issued by the person, broadcasts test operation instructions; the person performs the test according to these instructions, and after the test is completed a vision test sheet is generated from the image information. Because several persons can be tested simultaneously, and the detection screen itself guides the person through the test without an additional examiner, the scheme is friendly to both the persons to be tested and the examiner, and detection efficiency is improved.
Optionally, the man-machine interaction detection screen further includes:
the judging module is used for judging, after each broadcast of a test operation instruction, whether the feedback duration of the received feedback instruction is within the duration threshold range;
and the adjusting module is used for adjusting the subsequent test operation instructions when the feedback duration is not within the duration threshold range.
Optionally, the man-machine interaction detection screen further includes:
the data receiving module is used for receiving initial eye data of a person to be tested;
the searching module is used for searching historical detection data matched with the initial eye data in a preset detection data database;
the calling module is used for retrieving the test final row data associated with the matched detection data;
and the determining module is used for confirming the final row data with the greatest number of occurrences as the test head row in the test operation instructions.
By adopting the technical scheme, the test head row can be determined quickly and approximately from the detection data database, which improves detection efficiency and avoids starting every test from the first row or from a fixed row.
In summary, the present application has at least the following beneficial effects:
1. Since the man-machine interaction vision detection cabin is divided into four detection areas, four persons can be tested at the same time. After a person to be tested enters a detection area, the corresponding man-machine interaction detection screen is woken up, receives the person's height data, and adjusts its own height accordingly. The screen then acquires image information of the person to be tested and, after receiving a start instruction issued by the person, broadcasts test operation instructions; the person performs the test according to these instructions, and after the test is completed a vision test sheet is generated from the image information. Because several persons can be tested simultaneously, and the detection screen itself guides the person through the test without an additional examiner, the scheme is friendly to both the persons to be tested and the examiner, and detection efficiency is improved.
2. The test head row can be determined quickly and approximately from the detection data database, which improves detection efficiency, shortens detection time, and avoids starting every test from the first row or from a fixed row.
3. If something in the vision test sheet is unclear to the person to be tested, or the person has questions to ask, a manual service request can be sent, and the man-machine interaction detection screen connects a video call to an idle ophthalmologist, so that the person's questions can be answered online.
Drawings
FIG. 1 is a block flow diagram of an embodiment of a method of the present application;
FIG. 2 is a block flow diagram of steps that may be performed prior to outputting test operation instructions;
FIG. 3 is a block flow diagram of another implementation of the method embodiment of the present application;
FIG. 4 is a block flow diagram of yet another implementation of the method embodiment of the present application;
FIG. 5 is a block diagram of an embodiment of a system of the present application;
FIG. 6 is a block diagram of another embodiment of a human-machine interaction detection screen;
fig. 7 is a schematic structural diagram of a human-computer interaction vision testing cabin.
Reference numerals illustrate: 110. a man-machine interaction detection screen; 111. a signal receiving module; 112. a data receiving module; 113. an information receiving module; 114. an instruction receiving module; 115. a voice broadcasting module; 116. a detection list generation module; 118. a searching module; 119. a calling module; 120. a determining module; 121. a judging module; 122. an adjustment module; 123. an instruction sending module; 124. a request receiving module; 125. a calling module; 126. an information transmitting module; 130. a human body infrared sensor; 140. a ranging sensor; 150. an adjustment module; 160. and an image acquisition module.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to fig. 1 to 7 in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
An embodiment of the application discloses a man-machine interaction vision detection method based on virtual reality. Referring to fig. 1, as an embodiment of the detection method, the detection method may include S110 to S160:
s110, receiving a wake-up signal;
s120, receiving height data of a person to be tested;
s130, adjusting the height of the human-computer interaction detection screen based on the height data;
s140, receiving image information of a person to be detected;
s150, after receiving the start instruction, broadcasting a test operation instruction;
and S160, after the test is completed, generating a vision testing sheet based on the image information.
Specifically, a man-machine interaction vision detection cabin is installed in a vision detection site and divided into four detection areas, each provided with a man-machine interaction detection screen. Each detection area also contains a human body infrared sensor, which sends a wake-up signal when it detects a person to be tested. A distance measuring sensor is arranged near each infrared sensor to measure the height of the person to be tested, and a camera on the detection screen acquires the person's image information. Each detection screen corresponds to a set of adjustment modules, which adjust the height of the screen according to the person's height so as to meet the test requirements. In addition, a sliding shielding plate is provided in each detection area to cover one of the person's eyes. After the detection screen is woken up, the visual acuity chart is displayed; once ready, the person can issue a start instruction by voice. The shielding plate covers the person's left or right eye, and the detection screen then outputs test operation instructions by voice; for example, the character the person should look at is highlighted, or the current character is announced by voice, such as "first row, first character". The test operation instructions can be stored in the detection screen in advance by an examiner. S160 is then performed after the test is completed; the vision test sheet contains the vision test result, associated eye-care suggestions, and so on.
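Steps S110 to S160 can be summarized as a simple procedure. The screen object and every method name in this sketch are illustrative stand-ins, not an API defined by the patent.

```python
# Minimal procedural sketch of steps S110-S160; all names are assumed.
def run_detection(screen):
    screen.receive_wake_signal()                # S110: infrared sensor wakes the screen
    height = screen.receive_height_data()       # S120: height from the ranging sensor
    screen.adjust_height(height)                # S130: move the screen to match the height
    image = screen.receive_image_info()         # S140: camera captures the person's image
    screen.wait_for_start_instruction()         # person issues a start instruction by voice
    screen.broadcast_test_instructions()        # S150: voice-guided test begins
    return screen.generate_vision_sheet(image)  # S160: produce the vision test sheet
```

Any object implementing these seven methods (one per detection screen) could drive a detection area through the described flow.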
In addition, when testing vision an examiner usually determines the test head row from a fixed row or according to subjective judgment, so a longer test time may be needed and the person's vision can only be confirmed after testing many rows. To shorten the test time and complete each person's test quickly, and referring to fig. 2, the following steps S210 to S240 are executed before the man-machine interaction detection screen broadcasts the test operation instructions:
s210, receiving initial eye data of a person to be tested;
s220, searching historical detection data matched with the initial eye data in a preset detection data database;
s230, calling test final row data associated with the matched detection data;
s240, confirming the final row data with the greatest number of occurrences as the test head row in the test operation instructions.
Specifically, the initial eye data includes diopter, glasses degree, and the like. The detection data database stores the eye data of all historical testers and the associated test final row data. The historical detection data is the union of the data of all historical testers at the current test center and the test data of historical testers retrieved from ophthalmic hospitals within a preset surrounding range.
The test final row data indicates the row of the visual acuity chart at which a tester's uncorrected vision was finally determined when the vision test ended. The test head row indicates the row of the chart at which a tester starts the test.
One way is as follows:
after the person to be tested enters the detection area, the man-machine interaction detection screen is awakened, at the moment, the man-machine interaction detection screen can display the fuzzy data and the accurate data, if the vision or the glasses degree of the person to be tested are not clear, the fuzzy data module can be clicked, so that the approximate vision or the eyes degree can be input, and the input data can be a certain value or a range. If the person to be tested has clear own vision or eye degree, the "accurate data" module can be clicked.
For "fuzzy data":
if the value is a certain value, the man-machine interaction detection screen automatically searches similar data in a preset detection data database; and judging the final line data of the test with the largest occurrence number in the searched similar data to determine the first line of the test, namely, directly starting the test from the line during the test.
If the range is the range of the glasses degree, taking the average value of the sum of the two end point values, and further searching similar data from the detection data database for judgment.
If the range is the range of the vision data, the minimum value is taken, and then similar data is searched from the detection database for judgment.
Similar data means data whose error from the input lies within a preset error range threshold: for the glasses degree, any record within 100 degrees above or below the input value counts as similar data; for vision data, any record within 0.1 above or below the input value counts as similar data.
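The similar-data rule can be sketched directly from the thresholds stated above (100 degrees for the glasses degree, 0.1 for vision data); the function name and the `kind` flag are illustrative assumptions.

```python
# Sketch of the "similar data" rule: glasses degrees within 100 of the
# input and vision values within 0.1 of the input count as similar.
def is_similar(input_value, candidate, kind):
    tolerance = 100 if kind == "degree" else 0.1  # "vision" otherwise
    return abs(candidate - input_value) < tolerance
```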
For "precision data":
the man-machine interaction detection screen searches the same data in the detection data database and then judges.
Another way is as follows:
if the person to be tested does not know the vision or the glasses degree of the person to be tested, the diopter can be detected by an instrument (such as an optometry instrument, a vision screening instrument and the like) for detecting diopter, which is arranged at the man-machine interaction detection screen. Because diopter and vision are not in a linear relation, after the diopter is acquired by the man-machine interaction detection screen, the diopter is input into a neural network model which is trained in advance to obtain an output value, similar historical values are screened from a detection data database according to the output value, so that corresponding vision data are preliminarily determined, and a final test row associated with the vision data is determined as a test head row. The similarity value refers to that the error between the output value and the historical value in the database is not greater than a preset error threshold.
This method can quickly determine the row at which the person to be tested starts the test, avoiding starting from a fixed row every time. Because the test head row is relatively close to the person's final row, the test time can be greatly shortened and detection efficiency improved. The method is particularly suitable for scenarios such as physical examinations where testers queue for testing, since it greatly reduces the queuing time of the persons to be tested.
Finally, it should be noted that after each broadcast of a test operation instruction, whether the feedback duration of the received feedback instruction is within the duration threshold range is judged; if yes, the test proceeds normally according to the test operation instructions; if not, the subsequent test operation instructions are adjusted. The feedback duration is the time from the moment the test operation instruction is broadcast, when timing starts, until the feedback of the person to be tested is received.
Adjusting the subsequent test operation instructions specifically comprises:
if the feedback duration is greater than the maximum of the duration threshold range, testing in top-to-bottom order of the visual acuity chart; and if the feedback duration is less than the minimum of the duration threshold range, testing in bottom-to-top order of the visual acuity chart.
Referring to fig. 3, as another embodiment of the detection method, the detection method may further include S310 to S350:
s310, when a plurality of persons to be tested enter the man-machine interaction vision detection cabin and wait for detection, judging whether the test head rows of the detection areas are the same;
s320, if yes, sending a total control instruction;
s330, based on the total control instruction, broadcasting the test operation instructions uniformly;
s340, if not, sending single control instructions;
s350, based on the single control instructions, broadcasting the test operation instructions independently.
Specifically, if the first test rows of the detection areas are the same, the eye data of the several persons to be tested are similar, so they can be tested uniformly through the total control instruction; if the first test rows differ, each detection area is tested independently.
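Steps S310 to S350 amount to a simple dispatch rule over the per-area first test rows. The sketch below is illustrative only; the function name and the "total"/"single" labels are assumptions standing in for the total and single control instructions.

```python
def dispatch_control(first_rows):
    """Choose between total and single control, per S310-S350.

    `first_rows` lists the first test row determined for each occupied
    detection area. When every area shares the same first row, one
    uniformly broadcast instruction serves all areas ("total" control);
    otherwise each area is driven by its own instruction ("single").
    """
    if len(set(first_rows)) == 1:          # all areas agree on the first row
        return ("total", first_rows[0])    # broadcast one instruction to all
    return ("single", list(first_rows))    # broadcast per-area instructions
```

With four testers whose first rows are all row 5, a single total control instruction is broadcast; if one tester's first row differs, each detection area is handled independently.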
After the vision detection list is generated, one embodiment may perform the following step:
receiving a manual service request, and based on the manual service request, initiating a video call with an available ophthalmologist.
Specifically, the man-machine interaction detection screen can receive the status information of each ophthalmologist in real time and judge from that information whether the ophthalmologist is free or busy.
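The free-or-busy check can be sketched as follows. The function, the doctor identifiers, and the "free"/"busy" status strings are illustrative assumptions; the patent only states that real-time status information is received and judged.

```python
def pick_free_doctor(doctor_status):
    """Return the id of an available ophthalmologist to video-call.

    `doctor_status` maps a doctor id to "free" or "busy", as reported
    in real time to the detection screen. Returns None when every
    doctor is busy, in which case no video call can be placed yet.
    """
    for doc_id, state in sorted(doctor_status.items()):
        if state == "free":
            return doc_id    # first available doctor, in a stable order
    return None              # all doctors busy
```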
Referring to fig. 4, after the vision detection list is generated, another embodiment may perform the following steps S410 to S440:
S410, receiving a manual inspection request;
S420, judging whether the manual inspection requests exceed half of the detection areas;
S430, if yes, sending prompt information to the terminal of an ophthalmologist so that the ophthalmologist quickly descends from upstairs to downstairs; the man-machine interaction vision detection bin is located downstairs, and a quick descent channel is arranged in the middle of the four detection areas;
S440, if not, broadcasting the location information of the ophthalmology department.
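The majority test in S420 to S440 can be written down directly. This is a sketch under stated assumptions: the function name and return labels are invented for illustration, and the four-area count follows the layout described in this embodiment.

```python
def handle_manual_checks(request_count, area_count=4):
    """Per S410-S440: decide how to respond to manual inspection requests.

    When the requests exceed half of the detection areas, the
    ophthalmologist's terminal is prompted so the doctor descends to
    the detection bin; otherwise the department's location is simply
    broadcast so testers can walk up themselves.
    """
    if request_count > area_count / 2:       # strict majority of areas
        return "notify_doctor_terminal"      # doctor comes downstairs
    return "broadcast_department_location"   # testers go upstairs
```

With the four-area bin, three simultaneous requests summon the doctor, while two or fewer only trigger the location broadcast.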
The implementation principle of the embodiment is as follows:
after a person to be tested enters a detection area, a wake-up signal is received; the height data of the person to be tested is received, and the height of the human-computer interaction detection screen is adjusted based on the height data; the image information of the person to be tested is received; after a start instruction is received, a test operation instruction is broadcast; and after the test is finished, a vision detection list is generated based on the image information.
Based on the method embodiment, a second embodiment of the application discloses a man-machine interaction vision detection system based on virtual reality. Referring to fig. 5, as an embodiment of the detection system, the detection system may include:
the man-machine interaction detection screen 110 is distributed in four detection areas, and the four detection areas are distributed in the man-machine interaction vision detection bin;
the human body infrared sensors 130 are distributed in the four detection areas and are used for sending a wake-up signal when detecting a person to be detected;
the distance measuring sensors 140 are arranged beside the human body infrared sensors 130 and are used for measuring the height of the person to be tested;
the adjusting module 150 adjusts the height of the man-machine interaction detection screen 110 based on the height data; the adjustment module 150 may adopt a linear driving mechanism such as an air cylinder or a screw rod;
the image acquisition module 160 is installed on the man-machine interaction detection screen 110 and is used for acquiring image information of a person to be detected; the image acquisition module 160 may be a camera;
the man-machine interaction detection screen 110 includes:
a signal receiving module 111 for receiving a wake-up signal;
a data receiving module 112 for receiving height data;
an information receiving module 113, configured to receive image information of a person to be detected;
an instruction receiving module 114, configured to receive a start instruction;
the voice broadcasting module 115 is configured to broadcast a test operation instruction after receiving a start instruction;
the detection list generation module 116 is configured to generate a vision detection list based on the image information after the test is completed.
The man-machine interaction detection screen 110 may further include:
a searching module 118, configured to search a preset historical eye database for historical eye data that matches the initial eye data; the initial eye data is received by the data receiving module 112;
a retrieving module 119, configured to retrieve the final test row data associated with the matched historical eye data;
the determining module 120 is configured to determine the final test row data with the largest number of occurrences as the first test row in the test operation instruction.
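The searching, retrieving, and determining modules together implement a frequency vote over historical final rows. The sketch below is an illustrative assumption of how that pipeline could look; the dictionary shape of the historical database and the function name are not taken from the disclosure, and the endpoint-averaging rule for a power range follows claim 5.

```python
from collections import Counter

def first_test_row(initial_power, history):
    """Determine the first test row from matched historical eye data.

    `history` maps an eyeglass power to the list of final test rows
    recorded for past testers with that power (the historical eye
    database). When `initial_power` is a (low, high) range, the average
    of the two endpoint values is used for matching, as claim 5
    describes. Returns the most frequent associated final row, which
    becomes the first test row of the test operation instruction, or
    None when no historical data matches.
    """
    if isinstance(initial_power, tuple):           # a range of powers
        initial_power = sum(initial_power) / 2     # average the endpoints
    final_rows = history.get(initial_power)
    if not final_rows:
        return None                                # no matching history
    return Counter(final_rows).most_common(1)[0][0]
```

For instance, if past testers with a -2.00 power most often finished on row 6, a new tester reporting either -2.00 or the range (-3.00, -1.00) would start the test at row 6.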
In addition, the human-computer interaction detection screen 110 may further include:
the judging module 121 is configured to judge whether a feedback duration of the received feedback instruction is within a duration threshold range after each broadcasting of the test operation instruction;
the adjusting module 122 is configured to adjust the subsequent test operation instruction when the feedback duration is not within the duration threshold range.
It should be noted that, the man-machine interaction detection screen 110 may further include:
the instruction sending module 123 is configured to send a total control instruction or send a single control instruction.
When a plurality of persons to be tested enter the man-machine interaction vision detection bin to wait for detection, the judging module 121 judges whether the first test rows of the detection areas are the same. If they are, the instruction sending module 123 sends a total control instruction, and the voice broadcasting module 115 uniformly broadcasts the test operation instruction based on it; if not, the instruction sending module 123 sends a single control instruction, and the voice broadcasting module 115 independently broadcasts the test operation instruction based on it.
Referring to fig. 6, as another embodiment of the human-computer interaction detection screen 110, the human-computer interaction detection screen 110 may further include:
a request receiving module 124, configured to receive a manual service request;
a call module 125, configured to initiate a video call with an available ophthalmologist based on the manual service request;
the information sending module 126 is configured to send the prompt information.
It should be noted that, after the request receiving module 124 receives manual inspection requests, the judging module 121 may judge whether the requests exceed half of the detection areas. If yes, the information sending module 126 sends prompt information to the terminal of the ophthalmologist; if not, the voice broadcasting module 115 broadcasts the location information of the ophthalmology department.
With reference to fig. 7, the man-machine interaction vision detection bin is described. The bin comprises four detection rooms and a sitting-diagnosis room; the four detection rooms are arranged around the sitting-diagnosis room and communicate with it. The man-machine interaction detection screen 110 serves as the connecting door of the sitting-diagnosis room: initially, the screen blocks communication between the sitting-diagnosis room and the detection rooms, and after the screen rotates, the two are connected. Because the man-machine interaction vision detection bin is located downstairs while the ophthalmology consultation department is upstairs, a quick descent channel is arranged above the sitting-diagnosis room; the quick descent channel can be understood as a mechanism, such as an elevator, that realizes vertical displacement.
The implementation principle of the embodiment is as follows:
after the person to be tested enters a detection area, the signal receiving module 111 receives a wake-up signal, the data receiving module 112 receives the height data of the person to be tested, and the adjusting module 150 adjusts the height of the man-machine interaction detection screen 110 based on the height data; the information receiving module 113 receives the image information of the person to be tested; after the instruction receiving module 114 receives the start instruction, the voice broadcasting module 115 broadcasts the test operation instruction, and after the test is finished, the detection list generation module 116 generates a vision detection list based on the image information.
The foregoing description of the preferred embodiments of the present application is not intended to limit the scope of the application. Unless expressly stated otherwise, any feature disclosed in this specification (including the abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose; that is, unless expressly stated otherwise, each feature is only one example of a generic series of equivalent or similar features.

Claims (10)

1. A human-computer interaction vision detection method based on virtual reality, characterized by using a human-computer interaction vision detection bin, wherein the human-computer interaction vision detection bin comprises four detection areas, each detection area is provided with a human-computer interaction detection screen, and the human-computer interaction detection screen serves as the execution subject; the detection method comprises the following steps:
receiving a wake-up signal;
receiving height data of a person to be tested;
based on the height data, adjusting the height of the man-machine interaction detection screen;
receiving the image information of the person to be detected;
after receiving the start instruction, broadcasting a test operation instruction;
and after the test is finished, generating a vision testing sheet based on the image information.
2. The method for detecting human-computer interaction vision based on virtual reality according to claim 1, wherein the detecting method further comprises:
after each time of broadcasting the test operation instruction, receiving a feedback instruction;
judging whether the feedback time length of the received feedback instruction is within a time length threshold value range or not;
if not, adjusting the subsequent test operation instruction.
3. The human-computer interaction vision testing method based on virtual reality according to claim 2, wherein the adjusting the subsequent testing operation instruction specifically comprises:
if the feedback time length is greater than the highest value of the time length threshold range, testing according to the sequence of the visual acuity chart from top to bottom;
and if the feedback time length is smaller than the minimum value of the time length threshold range, testing according to the order of the visual acuity chart from bottom to top.
4. The human-computer interaction vision testing method based on virtual reality according to claim 1, wherein before broadcasting the test operation instruction, the method comprises:
receiving initial eye data of the person to be tested;
searching historical detection data matched with the initial eye data in a preset detection data database;
calling the test final row data in the matched historical detection data;
and confirming the final test line data with the largest occurrence number as the test head line in the test operation instruction.
5. The method for detecting human-computer interaction vision based on virtual reality according to claim 4, wherein the initial eye data comprises diopters and glasses powers; the history detection data comprise eye data of all history testers and corresponding associated vision test final line data;
after receiving the glasses power of the person to be tested, judging whether the glasses power is a single value or a range;
if it is a single value, directly searching the detection database for the matched value;
if it is a range, taking the average of the two endpoint values and searching the detection database for the matched value;
retrieving test final row data associated with the matched value;
and confirming the final test line data with the largest occurrence number as the test head line in the test operation instruction.
6. The method for detecting human-computer interaction vision based on virtual reality according to claim 4, further comprising:
when a plurality of people to be tested enter the man-machine interaction vision detection bin to wait for detection, judging whether the first test row of each detection area is the same or not;
if yes, a general control instruction is sent;
based on the total control instruction, uniformly broadcasting a test operation instruction;
if not, a single control instruction is sent;
based on the single control instruction, the test operation instruction is independently broadcasted.
7. The method for detecting human-computer interaction vision based on virtual reality according to claim 1, wherein after generating the vision detection sheet, the method comprises the following steps:
receiving a manual service request;
based on the manual service request, initiating a video call with an available ophthalmologist.
8. The method for detecting human-computer interaction vision based on virtual reality according to claim 1, wherein after generating the vision detection sheet, further comprising:
receiving a manual inspection request;
judging whether the manual inspection requests exceed half of the detection areas;
if yes, sending prompt information to a terminal of an ophthalmologist so that the ophthalmologist quickly descends from upstairs to downstairs, wherein the man-machine interaction vision detection bin is located downstairs, and a quick descent channel is arranged in the middle of the four detection areas;
if not, broadcasting the position information of the eye department.
9. Human-computer interaction vision testing system based on virtual reality, characterized by comprising:
the human-computer interaction detection screen (110) is distributed in four detection areas, the four detection areas are arranged in a human-computer interaction vision detection bin, and the human-computer interaction vision detection bin is arranged in a vision detection place in advance;
the human body infrared sensors (130) are distributed in the four detection areas and are used for sending a wake-up signal when detecting a person to be detected;
the distance measuring sensors (140) are distributed at the human body infrared sensors (130) and are used for measuring the height of a person to be measured;
an adjustment module (150) for adjusting the height of the human-computer interaction detection screen (110) based on height data;
the image acquisition module (160) is arranged on the man-machine interaction detection screen (110) and is used for acquiring image information of a person to be detected;
the man-machine interaction detection screen (110) comprises:
a signal receiving module (111) for receiving the wake-up signal;
a data receiving module (112) for receiving the height data;
the information receiving module (113) is used for receiving the image information of the personnel to be detected;
an instruction receiving module (114) for receiving a start instruction;
the voice broadcasting module (115) is used for broadcasting a test operation instruction after receiving the start instruction;
and the detection list generation module (116) is used for generating a vision detection list based on the image information after the test is completed.
10. The virtual-reality-based human-computer interaction vision detection system according to claim 9, wherein the human-computer interaction detection screen (110) further comprises:
the data receiving module (112) is used for receiving initial eye data of a person to be tested;
the searching module (118) is used for searching historical detection data matched with the initial eye data in a preset detection data database;
a retrieval module (119) for retrieving test final row data associated with the matched detection data;
and the determining module (120) is used for determining the final test row data with the largest occurrence number as the first test row in the test operation instruction.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311576502.4A CN117547217A (en) 2023-11-23 2023-11-23 Man-machine interaction vision detection method and system based on virtual reality


Publications (1)

Publication Number Publication Date
CN117547217A 2024-02-13

Family

ID=89818230



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109700424A (en) * 2018-12-29 2019-05-03 杭州瞳创医疗科技有限公司 Multi-person simultaneous vision detection and monitoring management method and system
CN211381363U (en) * 2019-07-29 2020-09-01 上海广蕴信息科技有限公司 Children physical examination device
CN113491500A (en) * 2021-06-30 2021-10-12 深圳云合科技有限公司 Vision detection system and storage medium
CN116616691A (en) * 2023-05-19 2023-08-22 湖南至真明扬技术服务有限公司 Man-machine interaction vision detection method and system based on virtual reality



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination