CN113509136A - Detection method, vision detection method, device, electronic equipment and storage medium - Google Patents

Detection method, vision detection method, device, electronic equipment and storage medium

Info

Publication number
CN113509136A
CN113509136A (application number CN202110473883.8A)
Authority
CN
China
Prior art keywords
vision
target object
distance
detection
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110473883.8A
Other languages
Chinese (zh)
Inventor
姚项军
魏文斌
梁伟
白斯琴
徐捷
王静
肖向春
胡雅斌
董力
王斌
由彬
吕本登
冯杨
付晶
焦永红
虞嘉
王国迁
许珊珊
祖巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tongren Hospital
BOE Art Cloud Technology Co Ltd
Original Assignee
Beijing Tongren Hospital
Beijing BOE Art Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tongren Hospital and Beijing BOE Art Cloud Technology Co Ltd
Priority to CN202110473883.8A
Publication of CN113509136A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/0075: Apparatus provided with adjusting devices, e.g. operated by control lever
    • A61B 3/0083: Apparatus provided with means for patient positioning
    • A61B 3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/028: Subjective types for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B 3/032: Devices for presenting test symbols or characters, e.g. test chart projectors

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A detection method, a vision detection method, a device, an electronic device and a storage medium. The detection method includes: in response to a detection request from a target object, acquiring contour information to be matched of a specific body part of the target object; and determining whether the distance between the target object and a terminal device is a first predetermined distance based on the degree of matching between the contour information to be matched and first contour information of the specific body part of the target object. The first contour information is determined based on a target image corresponding to the first predetermined distance, the target image being an image containing the specific body part of the target object, acquired when the distance between the target object and the terminal device is the first predetermined distance. The detection method can improve the accuracy of distance detection.

Description

Detection method, vision detection method, device, electronic equipment and storage medium
Technical Field
Embodiments of the present disclosure relate to a detection method, a vision detection method, a device, an electronic device and a storage medium.
Background
With the development of computer and Internet technologies, terminal devices such as mobile phones, computers and televisions can realize more and more functions, for example by installing various types of application programs. Some of these functions require a specified distance between the user and the terminal device. For example, in a scenario in which a vision test is performed with a terminal device, an eye chart may be displayed on the device, in which case the user needs to stand at a specified distance from the terminal device to take the test.
Disclosure of Invention
At least one embodiment of the present disclosure provides a detection method, including: in response to a detection request from a target object, acquiring contour information to be matched of a specific body part of the target object; and determining whether the distance between the target object and a terminal device is a first predetermined distance based on the degree of matching between the contour information to be matched and first contour information of the specific body part of the target object. The first contour information is determined based on a target image corresponding to the first predetermined distance, the target image being an image containing the specific body part of the target object, acquired when the distance between the target object and the terminal device is the first predetermined distance.
For example, the detection method provided in at least one embodiment of the present disclosure further includes acquiring the target image, which includes: in response to a first detection request from the target object, detecting the distance between the target object and the terminal device; and controlling the terminal device to acquire the target image when the distance between the target object and the terminal device is the first predetermined distance.
For example, in a detection method provided in at least one embodiment of the present disclosure, a distance between the target object and the terminal device is detected based on a ranging apparatus.
For example, in a detection method provided in at least one embodiment of the present disclosure, the target object holds a reference object while the distance between the target object and the terminal device is being detected; detecting the distance between the target object and the terminal device includes: acquiring object contour information of the reference object, the object contour information corresponding to the first predetermined distance; acquiring a reference image, the reference image being an image containing the reference object; determining contour information to be matched of the reference object based on the reference image; and determining the distance between the reference object and the terminal device as the distance between the target object and the terminal device based on the degree of matching between the contour information to be matched of the reference object and the object contour information.
For example, the detection method provided in at least one embodiment of the present disclosure further includes: determining that the distance between the target object and the terminal device is the first predetermined distance when the degree of matching between the contour information to be matched and the first contour information satisfies a predetermined matching condition.
For example, in a detection method provided by at least one embodiment of the present disclosure, acquiring contour information to be matched of a specific body part of the target object includes: acquiring an image to be matched containing a specific body part of the target object; and determining the contour information to be matched of the specific body part based on the image to be matched.
For example, the detection method provided in at least one embodiment of the present disclosure further includes: when the distance between the target object and the terminal device is smaller than the first predetermined distance, controlling the terminal device to output first prompt information to prompt the target object to move away from the terminal device so that the contour information to be matched matches the first contour information; and/or, when the distance between the target object and the terminal device is greater than the first predetermined distance, controlling the terminal device to output second prompt information to prompt the target object to move toward the terminal device so that the contour information to be matched matches the first contour information.
At least one embodiment of the present disclosure further provides a vision detection method, including: determining whether the distance between the target object and the terminal device is a first predetermined distance by using the detection method according to any one of the embodiments of the present disclosure; when the distance between the target object and the terminal device is the first predetermined distance, controlling the terminal device to display a first vision detection image corresponding to the first predetermined distance, the first vision detection image including at least one vision icon group, each vision icon group including at least one vision icon and representing one vision value; and performing vision detection on the target object based on the first vision detection image to obtain a vision detection result of the target object.
For example, in a vision detection method provided by at least one embodiment of the present disclosure, controlling the terminal device to display the first vision detection image corresponding to the first predetermined distance includes: determining a display size of each vision icon in each vision icon group based on the first predetermined distance; determining an orientation of each vision icon, the orientation of each vision icon being determined randomly; and generating the first vision detection image based on the display size and orientation of each vision icon.
For example, in a vision detection method provided by at least one embodiment of the present disclosure, performing vision detection on the target object based on the vision detection image includes: acquiring input information of the target object, the input information including at least one of voice information, gesture information, eyeball image information, head image information and remote control information; determining, based on the input information, the orientation of the vision icon as judged by the target object; and determining the vision detection result of the target object based on the orientation judged by the target object.
For example, in a vision detection method provided by at least one embodiment of the present disclosure, performing vision detection on the target object based on the vision detection image includes: acquiring the vision state of the target object, the vision state including a historical vision detection result and/or vision data input by the target object; and controlling the terminal device to start detection from the vision icon group corresponding to the vision state.
For example, in a vision detection method provided by at least one embodiment of the present disclosure, performing vision detection on the target object based on the vision detection image includes: acquiring a face image of the target object during the vision detection process; determining an eye occlusion state of the target object based on the face image; and controlling the terminal device to output prompt information and/or suspend the vision detection when the eye occlusion state does not satisfy a predetermined occlusion requirement.
For example, the vision detection method provided by at least one embodiment of the present disclosure further includes: when the detection distance included in the detection request is switched from the first predetermined distance to a second predetermined distance, determining second contour information corresponding to the second predetermined distance based on the first contour information; determining whether the distance between the target object and the terminal device is the second predetermined distance based on the second contour information; and, when the distance between the target object and the terminal device is the second predetermined distance, controlling the terminal device to display a second vision detection image corresponding to the second predetermined distance so as to perform vision detection on the target object based on the second vision detection image.
For example, the vision detection method provided by at least one embodiment of the present disclosure further includes: acquiring eye-use behavior information of the target object while the target object uses the terminal device; and outputting prompt information when the eye-use behavior information satisfies a predetermined eye-use condition.
For example, in a vision detection method provided by at least one embodiment of the present disclosure, the specific body part is the face of the target object and the first contour information is face contour information; or the specific body part is the iris of the target object and the first contour information is iris contour information.
For example, in a vision detection method provided in at least one embodiment of the present disclosure, the terminal device includes a display unit configured to display the contour information to be matched and the first contour information.
At least one embodiment of the present disclosure further provides a detection apparatus, including: a contour acquisition unit configured to acquire, in response to a detection request from a target object, contour information to be matched of a specific body part of the target object; and a contour matching unit configured to determine whether the distance between the target object and a terminal device is a first predetermined distance based on the degree of matching between the contour information to be matched and first contour information of the specific body part of the target object, the first contour information being determined based on a target image and corresponding to the first predetermined distance, the target image being an image containing the specific body part of the target object, acquired when the distance between the target object and the terminal device is the first predetermined distance.
At least one embodiment of the present disclosure further provides a vision detection apparatus, including: the detection apparatus according to any one of the embodiments of the present disclosure; a control unit configured to control the terminal device to display a first vision detection image corresponding to the first predetermined distance when the distance between the target object and the terminal device is the first predetermined distance, the first vision detection image including at least one vision icon; and a vision detection unit configured to perform vision detection on the target object based on the first vision detection image to obtain the vision detection result of the target object.
At least one embodiment of the present disclosure further provides a detection apparatus, including: a processor; a memory including one or more computer program modules; wherein the one or more computer program modules are stored in the memory and configured to be executed by the processor, the one or more computer program modules comprising instructions for implementing the detection method or vision detection method of any embodiment of the present disclosure.
At least one embodiment of the present disclosure further provides an electronic device, including the detection apparatus and/or the vision detection apparatus according to any embodiment of the present disclosure, and the terminal device, where the terminal device is configured to display the contour information to be matched and the first contour information.
At least one embodiment of the present disclosure also provides a storage medium for storing non-transitory computer-readable instructions, which can implement the detection method or the vision detection method according to any embodiment of the present disclosure when the non-transitory computer-readable instructions are executed by a computer.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings of the embodiments will be briefly described below; it is apparent that the drawings in the following description relate only to some embodiments of the present invention and do not limit the present invention.
Fig. 1 is a schematic flow chart of a detection method according to some embodiments of the present disclosure;
fig. 2 is a schematic diagram of a target object and a terminal device provided in some embodiments of the present disclosure;
FIG. 3 is a schematic illustration of a target image provided by some embodiments of the present disclosure;
FIG. 4 is a schematic diagram of capturing images to be matched according to some embodiments of the present disclosure;
fig. 5 is a schematic diagram of contour information to be matched and first contour information provided by some embodiments of the present disclosure;
FIG. 6 is a schematic flow chart of acquiring a target image according to some embodiments of the present disclosure;
fig. 7 is a schematic flow chart of a vision testing method according to some embodiments of the present disclosure;
fig. 8A is a schematic diagram of some vision icons provided by some embodiments of the present disclosure;
FIG. 8B is a schematic view of a user interface provided by some embodiments of the present disclosure;
FIG. 9 is a system that may be used to implement the detection methods provided by embodiments of the present disclosure;
fig. 10 is a schematic block diagram of a detection apparatus provided in some embodiments of the present disclosure;
fig. 11 is a schematic block diagram of a vision testing apparatus provided in some embodiments of the present disclosure;
fig. 12 is a schematic block diagram of a detection apparatus provided in some embodiments of the present disclosure;
fig. 13 is a schematic block diagram of an electronic device provided by some embodiments of the present disclosure;
fig. 14 is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure; and
fig. 15 is a schematic diagram of a storage medium according to some embodiments of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention without any inventive step, are within the scope of protection of the invention.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. Also, the use of the terms "a," "an," or "the" and similar referents do not denote a limitation of quantity, but rather denote the presence of at least one. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
The distance between the user and the terminal device may be detected by using a specific body part of the user, for example a human face or a hand; the following description takes the human face as an example.
For example, a standard face contour may be preset, on the assumption that every user's face contour conforms to it. The standard face contour is mapped into an image as it would be acquired by a terminal device located at a predetermined distance from that contour, the contour information of the standard face contour in that image is calculated, and the standard face contour is then displayed on the terminal device at the corresponding scale. In the application scenario of vision detection the predetermined distance may be, for example, 3 meters, and the contour information of the standard face contour as mapped into an image acquired by a terminal device 3 meters away is calculated in advance. This contour information in the image may be referred to as the standard contour information corresponding to the predetermined distance. While detecting the distance between the user and the terminal device, the user's face image is captured in real time; if the face contour information in the current face image matches the standard contour information, the distance between the user and the terminal device is taken to be the predetermined distance.
However, every person's face contour is different, and the face contours of many users do not conform to the standard face contour. Distance detection performed on different users with a single uniform standard face contour is therefore inaccurate, which in turn makes the vision detection result inaccurate.
At least one embodiment of the present disclosure provides a detection method. The detection method includes: in response to a detection request from a target object, acquiring contour information to be matched of a specific body part of the target object; and determining whether the distance between the target object and a terminal device is a first predetermined distance based on the degree of matching between the contour information to be matched and the first contour information of the specific body part of the target object. The first contour information is determined based on a target image corresponding to the first predetermined distance, the target image being an image containing the specific body part of the target object, acquired when the distance between the target object and the terminal device is the first predetermined distance.
At least one embodiment of the present disclosure also provides a detection apparatus, an electronic device, and a storage medium corresponding to the detection method.
The detection method provided by the embodiments of the present disclosure obtains, for each user, the contour information corresponding to the predetermined distance, and determines whether the user is located at the predetermined distance by matching that per-user contour information against the user's current contour information. This improves the accuracy of the distance detection result and avoids the inaccurate results caused by performing distance detection on different users with a single uniform contour.
Embodiments of the present disclosure and some examples thereof are described in detail below with reference to the accompanying drawings.
At least one embodiment of the present disclosure provides a detection method. The detection method may be applied to a vision detection scenario or to other application scenarios requiring distance detection, for example an interactive game scenario, to improve the accuracy of the distance detection result; embodiments of the present disclosure are not limited in this respect. The following description takes the application of the detection method to a vision detection scenario as an example; other application scenarios are similar and are not described again.
For example, the detection method can be implemented in software, hardware, firmware or any combination thereof, and is loaded and executed by a processor in equipment such as a mobile phone, a digital camera, a tablet computer, a notebook computer, a desktop computer, a learning machine, a television, a network server and the like, so as to improve the accuracy of the distance detection result and avoid the problem of inaccurate detection result caused by adopting a uniform contour to perform distance detection on different users.
For example, the detection method is applicable to a computing device (e.g., the terminal device described below), which includes any electronic device with a computing function, such as a mobile phone, a digital camera, a notebook computer, a tablet computer, a desktop computer, a learning machine, a television, a web server and the like, and which can load and execute the detection method; embodiments of the present disclosure are not limited in this respect. For example, the computing device may include a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP) or other forms of processing units with data processing capability and/or instruction execution capability, as well as storage units and the like; an operating system and application programming interfaces (e.g., OpenGL (Open Graphics Library), Metal, etc.) may also be installed on the computing device, and the detection method provided by the embodiments of the present disclosure is implemented by running code or instructions. For example, the computing device may further include an output component such as a display component, for example a display unit such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display or a quantum-dot light-emitting diode (QLED) display, which is not limited here. For example, the display component may display the first contour information, the contour information to be matched, the vision detection image and the like; embodiments of the present disclosure are not limited in this respect.
Fig. 1 is a schematic flow chart of a detection method according to some embodiments of the present disclosure. The following describes in detail a detection method provided by at least one embodiment of the present disclosure with reference to fig. 1. As shown in FIG. 1, in at least one embodiment, the method includes steps S10-S20.
Step S10: in response to a detection request from the target object, acquiring contour information to be matched of a specific body part of the target object.
Step S20: determining whether the distance between the target object and the terminal device is a first predetermined distance based on the degree of matching between the contour information to be matched and the first contour information of the specific body part of the target object.
For step S20, for example, the first contour information is determined based on the target image corresponding to the first predetermined distance, the target image being an image containing the specific body part of the target object, acquired when the distance between the target object and the terminal device is the first predetermined distance.
With respect to step S10, the target object may be, for example, a target user, which may be any user who uses the detection method, for example when taking a vision test. The detection method may be implemented by, for example, an application, and the target user then refers to any registered user of that application; therefore, the target object may hereinafter be referred to as the target user. The specific body part may be, for example, at least one of the face, a hand, an eye and the like, and embodiments of the present disclosure are not limited in this respect. The following description takes the case in which the specific body part is the face as an example, but the embodiments of the present disclosure are not limited to the face.
Fig. 2 is a schematic diagram of a target object and a terminal device according to some embodiments of the present disclosure. As shown in fig. 2, for example, before performing steps S10 and S20, a target image may be acquired in advance, the target image being an image containing a specific body part of the target object 201, and the target image being captured when the distance between the target object 201 and the terminal device 202 is the first predetermined distance L1. For example, the target image may be an image captured by an image capturing device on the terminal device 202, or may be an image captured by another image capturing device that is close to and flush with the terminal device 202, which is not limited in this respect by the embodiments of the present disclosure. For example, the image capturing device may include a CMOS (complementary metal oxide semiconductor) sensor, a CCD (charge coupled device) sensor, and the like, and may be, for example, a camera, and the like, which may perform image capturing, and embodiments of the present disclosure are not limited thereto.
Fig. 3 is a schematic diagram of a target image provided by some embodiments of the present disclosure. As shown in fig. 2 and 3, after the target image 301 is obtained, an image contour extraction algorithm such as an edge detection algorithm may be used to extract a contour 302 of the specific body part of the target object from the target image 301 to obtain the first contour information of the specific body part. For example, the edge detection algorithm may be a Canny edge detection algorithm, a Sobel edge detection algorithm or the like; for details, reference may be made to descriptions in the art, which are not repeated here.
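The patent provides no code; purely for illustration, a contour extraction step of this kind might be sketched as follows in Python with OpenCV, where the function name, the Canny thresholds and the largest-contour heuristic are assumptions rather than part of the disclosure.

```python
# A minimal sketch (assumptions, not the patent's implementation) of
# extracting the contour of a specific body part with Canny edge detection.
import cv2
import numpy as np

def extract_contour(image_bgr: np.ndarray) -> np.ndarray:
    """Return the largest external contour found in the image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress noise before edge detection
    edges = cv2.Canny(blurred, 50, 150)           # Canny hysteresis thresholds
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no contour found in the image")
    # Assume the largest contour corresponds to the specific body part.
    return max(contours, key=cv2.contourArea)
```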
For example, when the specific body part is the face, the first contour information may include information such as the length and width of the face contour and the face shape; when the specific body part is a hand, the first contour information may include information such as the palm width and finger lengths of the hand contour and the hand shape. Since the target image is captured when the distance between the target object 201 and the terminal device 202 is the first predetermined distance L1, the first contour information 302 extracted from the target image 301 corresponds to the first predetermined distance L1. The first contour information 302 can then be used for distance detection of the target object 201: whether the user is currently located at the predetermined distance is determined by matching the user's own first contour information, which corresponds to the first predetermined distance, against the user's current contour. This improves the accuracy of the distance detection result and avoids the inaccurate results caused by performing distance detection on different users with a uniform contour.
For example, in step S10 the detection request may be a distance detection request; in some application scenarios the detection request may also be a request related to the application scenario. For example, in a vision detection application scenario the detection request may be a vision detection request, i.e., after the target object inputs the vision detection request, distance detection is triggered first.
For example, in step S10, acquiring the contour information to be matched of the specific body part of the target object may include: acquiring an image to be matched that contains the specific body part of the target object, and determining the contour information to be matched of the specific body part based on the image to be matched.
For example, during the detection process the target object may move back and forth in the direction perpendicular to the display screen of the terminal device, and an image to be matched may be collected at predetermined intervals (for example every 0.5 second; 0.8 second, 0.2 second and the like are also possible, determined according to the actual situation, which is not limited by the embodiments of the present disclosure). The image to be matched is an image containing the specific body part, captured when the target object is at its current position. For example, in some examples, when the specific body part is the face of the target object, the image to be matched is an image containing the face of the target object. For example, the image to be matched may be captured by the terminal device or by an image acquisition device that is close to and flush with the terminal device, which is not limited by the embodiments of the present disclosure. For example, each time an image to be matched is acquired, the contour of the specific body part of the target object is extracted from it by using an edge detection algorithm or another contour extraction algorithm to obtain the contour information to be matched.
Fig. 4 is a schematic diagram of acquiring an image to be matched according to some embodiments of the present disclosure. As shown in fig. 4, when the target user 201 moves to the point a, an image to be matched is acquired, and the contour of a specific body part is extracted from the image to be matched, so as to obtain contour information to be matched.
For example, the terminal device 202 includes a display unit configured to display the contour information to be matched and the first contour information. For example, the display unit may be a display screen of a terminal device, for example, the display screen is a liquid crystal display, an organic light emitting diode display, a quantum dot light emitting diode display, or the like, and embodiments of the present disclosure are not limited thereto. For example, the display unit may also display a vision test image and the like mentioned in the subsequent steps, and the embodiment of the present disclosure is not limited thereto.
As shown in fig. 5, the contour information to be matched may be presented on the display screen of the terminal device together with the first contour information.
For example, after obtaining the contour information to be matched, the contour information to be matched may be matched with the first contour information. And under the condition that the matching degree of the contour information to be matched and the first contour information meets a preset matching condition, determining the distance between the target object and the terminal equipment as a first preset distance.
For example, in some examples, matching the contour information to be matched with the first contour information may mean matching their size information, and satisfying the predetermined matching condition may mean that the difference between the size information included in the contour information to be matched and that included in the first contour information is smaller than a predetermined difference threshold. For example, the predetermined difference threshold may be determined empirically by those skilled in the art as the case may be, and embodiments of the present disclosure are not limited in this respect. For example, as shown in fig. 5, when the specific body part is the face, if the difference between the face size included in the contour information 402 to be matched and the face size included in the first contour information 302 is smaller than the predetermined difference threshold, the contour information 402 to be matched may be considered to match the first contour information 302; otherwise, they are considered not to match.
For example, in other examples, matching the contour information to be matched with the first contour information may refer to calculating an average distance between two contours, and satisfying the predetermined matching condition may refer to the average distance between two contours being less than a certain predetermined distance threshold. For example, the predetermined distance threshold may be determined empirically by one skilled in the art, as the case may be, and embodiments of the present disclosure are not limited in this regard. For example, as shown in fig. 5, an average distance between the contour information 402 to be matched and the first contour information 302 is calculated, if the average distance is smaller than a certain predetermined distance threshold, the contour information 402 to be matched may be considered to be matched with the first contour information 302, otherwise, the contour information 402 to be matched is considered not to be matched with the first contour information 302.
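As an illustration of the two example matching criteria above, a sketch is given below; the bounding-box size measure and the threshold values are assumptions that would be tuned in practice, not values from the patent.

```python
# A minimal sketch (an assumption, not the patent's code) of the two
# example matching criteria: size difference below a threshold, and mean
# point-to-contour distance below a threshold.
import cv2
import numpy as np

def match_by_size(contour_a, contour_b, max_diff_px: float = 10.0) -> bool:
    """Match if the bounding-box dimensions differ by less than a threshold."""
    _, _, wa, ha = cv2.boundingRect(contour_a)
    _, _, wb, hb = cv2.boundingRect(contour_b)
    return abs(wa - wb) < max_diff_px and abs(ha - hb) < max_diff_px

def match_by_mean_distance(contour_a, contour_b, max_mean_px: float = 8.0) -> bool:
    """Match if the average distance from points of A to contour B is small."""
    pts = contour_a.reshape(-1, 2).astype(np.float32)
    dists = [abs(cv2.pointPolygonTest(contour_b, (float(x), float(y)), True))
             for x, y in pts]
    return float(np.mean(dists)) < max_mean_px
```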
For example, when the contour information to be matched matches the first contour information, the distance between the target object and the terminal device is considered to be the first predetermined distance.
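Putting the pieces together, the periodic capture-and-match loop described above might look like the following sketch, reusing the extraction and matching sketches shown earlier; the camera interface (modeled on cv2.VideoCapture.read) and the 0.5-second interval are illustrative assumptions.

```python
# A minimal sketch (assumed structure) of the loop: every 0.5 s an image
# to be matched is captured, its contour extracted, and compared with the
# first contour information.
import time

def detection_loop(camera, first_contour, interval_s: float = 0.5) -> bool:
    while True:
        ok, frame = camera.read()            # hypothetical capture call
        if not ok:
            return False
        contour = extract_contour(frame)     # sketch shown earlier
        if match_by_size(contour, first_contour):
            return True                      # at the first predetermined distance
        time.sleep(interval_s)
```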
The detection method provided by the embodiments of the present disclosure obtains, for each user, the contour information corresponding to the predetermined distance, and determines whether the user is located at the predetermined distance by matching that per-user contour information against the user's current contour information. This improves the accuracy of the distance detection result and avoids the inaccurate results caused by performing distance detection on different users with a single uniform contour.
For example, in some examples, the particular body part may be a face of the target object, and the first contour information is face contour information. For example, in other examples, the particular body part may be an iris of the target object, and the first contour information is iris contour information. For example, in other examples, the specific body part may also be other body parts such as hands, shoulders, and the like, and embodiments of the present disclosure are not limited in this regard.
For example, the detection method may be executed by a server connected to the terminal device, or may be executed by the terminal device. A user may use a terminal device to interact with a server over a network to receive or send messages, etc. Various client applications may be installed on the terminal device, such as a vision detection-type application, a game-type application, a shopping-type application, a web browser application, and/or social platform software, etc. (by way of example only). For example, the terminal device may be various electronic devices having a display screen and supporting web browsing. For example, the terminal device may include a learning machine, a mobile phone, a tablet computer, a display, a television, an electronic drawing board, and the like, and may also be other devices having a display function, which is not limited in this respect by the embodiments of the present disclosure.
The detection method can be applied to other scenes besides the scene of vision detection, for example, the detection method can be applied to interactive games needing to detect the distance of the user, and the like.
Fig. 6 is a schematic flowchart of acquiring a target image according to some embodiments of the present disclosure. As shown in FIG. 6, in at least one embodiment, acquiring the target image includes steps S31 and S32.
Step S31: in response to the first detection request from the target object, detecting the distance between the target object and the terminal device.
Step S32: controlling the terminal device to acquire the target image when the distance between the target object and the terminal device is the first predetermined distance.
For example, when the target user performs detection for the first time, distance detection may be performed on the target user by using other means besides face detection, and when it is detected that the target user is located at a first predetermined distance, the target image is collected to obtain first contour information according to the target image. In the subsequent detection, the first contour information can be utilized to carry out distance detection on the target user.
For example, in some examples, the distance between the target object and the terminal device is detected based on a ranging device, such as an infrared ranging device or a radar ranging device; embodiments of the present disclosure are not limited in this respect. For example, an infrared and/or radar ranging device may be installed on the terminal device; during the first detection, the ranging device detects whether the target user has reached the first predetermined distance from the terminal device, and the target image of the target user at the first predetermined distance is then acquired. Because the detection result of the ranging device is highly accurate, the first contour information corresponding to the first predetermined distance can be obtained accurately.
For example, in other examples, the target object holds a reference object while the distance between the target object and the terminal device is being detected, and the distance can be determined by means of the reference object and its object contour information.
For example, in this example, the detecting of the distance between the target object and the terminal device in step S31 may include: acquiring object contour information of a reference object (e.g., the object contour information corresponds to a first predetermined distance); acquiring a reference image (for example, the reference image is an image containing a reference object); determining contour information to be matched of a reference object based on the reference image; and determining the distance between the reference object and the terminal equipment as the distance between the target object and the terminal equipment based on the matching degree between the contour information to be matched of the reference object and the contour information of the object.
For example, the reference object may be an object having a standard size, such as a matched remote controller, a sheet of A4 paper, a certain model of mobile phone or the like, which is not limited by the embodiments of the present disclosure. Since the reference object has a standard size, the object contour information of the reference object corresponding to the first predetermined distance can be calculated from that standard size: for example, the contour size of the reference object in the image can be calculated, as the object contour information, from the standard size of the reference object's contour, the object distance (the first predetermined distance) and the image distance of the camera. Since the object contour information is calculated based on the first predetermined distance, it corresponds to the first predetermined distance.
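As a worked illustration of that calculation, the pinhole-camera relation gives the expected image-plane size of a reference object of known physical size at the first predetermined distance; all numeric values below are assumptions for the example, not values from the patent.

```python
# A minimal sketch of the pinhole relation the passage describes:
# image size = real size * image distance / object distance.
def expected_pixel_width(real_width_mm: float,
                         object_distance_mm: float,
                         focal_length_mm: float,
                         pixel_pitch_mm: float) -> float:
    image_width_mm = real_width_mm * focal_length_mm / object_distance_mm
    return image_width_mm / pixel_pitch_mm  # convert mm on the sensor to pixels

# Example: an A4 sheet (210 mm wide) held 3 m away, a 4 mm lens and a
# 0.002 mm pixel pitch give an expected width of about 140 pixels.
print(expected_pixel_width(210.0, 3000.0, 4.0, 0.002))
```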
For example, during the first detection, the target user may hold the reference object while moving back and forth; the reference object is kept flush with the target user's body and may be close to a specific body part of the target user, such as the face. Alternatively, the target user's body may be brought flush with the reference object once the distance between the reference object and the terminal device is determined to be the first predetermined distance; embodiments of the present disclosure are not limited in this respect.
For example, while the target user moves back and forth, a reference image containing the reference object is captured at predetermined intervals, the contour of the reference object is extracted from it, and the resulting contour information to be matched of the reference object is compared with the previously calculated object contour information corresponding to the first predetermined distance. When the two match, the target user is considered to be located at the first predetermined distance; the matching may be performed in the same way as the matching between the first contour information and the contour information to be matched of the specific body part, which is not repeated here. Performing the first detection with a reference object of standard size guarantees detection accuracy, and allows the first distance detection to be completed without a ranging device, avoiding the problem that the first detection cannot be performed when no ranging device is installed on the terminal device.
For example, in other examples, because iris size varies little from person to person, the iris may also be used for ranging to improve the accuracy of the detection result. During the first detection, iris ranging can be performed as follows: acquiring an iris image of the target object; acquiring preset iris contour information, the iris contour information corresponding to the first predetermined distance; and determining whether the distance between the target object and the terminal device is the first predetermined distance based on the preset iris contour information and the iris image. For example, a standard iris size may be preset and the standard contour information of the standard iris at the first predetermined distance calculated in advance; when the user's current iris contour information matches the standard iris contour information during distance detection, the user is considered to have reached the first predetermined distance.
For example, in some examples, the detection method may further include: when the distance between the target object and the terminal device is smaller than the first predetermined distance, controlling the terminal device to output first prompt information to prompt the target object to move away from the terminal device so that the contour information to be matched matches the first contour information; and, when the distance between the target object and the terminal device is greater than the first predetermined distance, controlling the terminal device to output second prompt information to prompt the target object to move toward the terminal device so that the contour information to be matched matches the first contour information.
For example, when the contour information to be matched of the specific body part is larger than the first contour information, the distance between the target object and the terminal device is considered to be smaller than the first predetermined distance; when it is smaller than the first contour information, the distance is considered to be greater than the first predetermined distance.
For example, the first prompt information and the second prompt information may be image prompt information, the first being, for example, a red image and the second a yellow image; alternatively, they may be voice prompt information, the first being, for example, a voice broadcast similar to "move back" and the second similar to "move forward"; of course, other types of prompt information may also be used, which is not limited by the embodiments of the present disclosure. In this way, the user can quickly and accurately reach the position at the first predetermined distance from the terminal device.
For example, when the distance between the target object and the terminal device is equal to the first predetermined distance, the terminal device may further be controlled to output third prompt information. For example, the third prompt information may be image prompt information, such as a green image; alternatively, it may be voice prompt information, for example a voice broadcast similar to "arrived"; of course, other types of prompt information may also be used, which is not limited by the embodiments of the present disclosure.
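A minimal sketch of this three-way prompt logic follows, using contour width as the size measure; the tolerance value and prompt strings are assumptions for illustration.

```python
# The extracted contour's size relative to the first contour information
# decides whether the user is told to move back, move forward, or stay.
def distance_prompt(current_width_px: float,
                    target_width_px: float,
                    tolerance_px: float = 10.0) -> str:
    if current_width_px > target_width_px + tolerance_px:
        return "Please move back"      # too close: contour appears larger
    if current_width_px < target_width_px - tolerance_px:
        return "Please move forward"   # too far: contour appears smaller
    return "Distance reached"          # within tolerance of the first contour
```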
At least one embodiment of the present disclosure further provides a vision detection method, including: determining whether the distance between the target object and the terminal device is a first predetermined distance by using the detection method of any embodiment of the present disclosure; when the distance between the target object and the terminal device is the first predetermined distance, controlling the terminal device to display a first vision detection image corresponding to the first predetermined distance, the first vision detection image including at least one vision icon group, each vision icon group including at least one vision icon and representing one vision value; and performing vision detection on the target object based on the first vision detection image to obtain a vision detection result of the target object.
Fig. 7 is a flowchart illustrating a vision testing method according to some embodiments of the present disclosure. As shown in fig. 7, in at least one embodiment, the method may include steps S41-S42 in addition to S10 and S20 of the detection method of the above embodiment.
Step S41: when the distance between the target object and the terminal device is the first predetermined distance, controlling the terminal device to display a first vision detection image corresponding to the first predetermined distance, where the first vision detection image includes at least one vision icon group, each vision icon group includes at least one vision icon, and each vision icon group represents one vision value.
Step S42: performing vision detection on the target object based on the first vision detection image to obtain the vision detection result of the target object.
For example, controlling the terminal device to display the first vision detection image corresponding to the first predetermined distance includes: determining the display size of each vision icon in the plurality of vision icon groups based on the first predetermined distance, the plurality of vision icon groups respectively representing a plurality of vision values; determining the orientation of each vision icon, for example randomly; and generating the first vision detection image based on the display size and orientation of each vision icon.
For example, the first vision detection image may be an optotype image; the optotype may be, for example, an "E" icon, and the size of the optotype icon is calculated from the first predetermined distance together with information such as the physical width and height of the screen's display area and the screen resolution, so that the optotype displayed on the screen subtends the correct visual angle at the first predetermined distance. For example, the vision icon groups may correspond to the rows of icons in an eye chart, with different rows of vision icons having different sizes to indicate different vision values. Orienting each vision icon randomly avoids the problem of users cheating by memorizing the eye chart.
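For illustration, a sizing sketch is given below under the common optometric convention that an optotype for decimal acuity 1.0 subtends 5 minutes of arc at the test distance, and the relation between the 5-point scale used later in this description (4.0 to 5.3) and decimal acuity V, namely V = 10^(L-5); both conventions are assumptions here rather than statements from the patent, whose exact formula is not given.

```python
# A minimal sketch of optotype sizing from test distance, physical screen
# width and screen resolution; all example values are assumptions.
import math

ARCMIN = math.pi / (180 * 60)  # one minute of arc, in radians

def decimal_from_five_point(five_point: float) -> float:
    """Convert a 5-point-scale value (e.g., 4.0 to 5.3) to decimal acuity."""
    return 10 ** (five_point - 5)

def optotype_height_px(distance_mm: float,
                       decimal_acuity: float,
                       screen_width_mm: float,
                       screen_width_px: int) -> float:
    """Pixel height of an optotype subtending 5 arcmin, scaled by acuity."""
    height_mm = 2 * distance_mm * math.tan(2.5 * ARCMIN) / decimal_acuity
    px_per_mm = screen_width_px / screen_width_mm  # resolution over physical width
    return height_mm * px_per_mm

# Example: the 5.0 line (decimal 1.0) at 3 m on a display 527 mm wide at
# 1920 px yields roughly a 16-pixel-tall "E".
print(optotype_height_px(3000.0, decimal_from_five_point(5.0), 527.0, 1920))
```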
For example, the first vision detection image may be an eye chart consisting of multiple rows of vision icons, or an image containing a single vision icon, in which case another vision icon is displayed after one has been tested. Alternatively, the first vision detection image may contain a single row of vision icons, with another row displayed after one row has been tested.
For example, the optotype type may be a standard E optotype, a children's cartoon optotype, a C optotype or the like, and different optotype types can be switched according to the user's selection.
For example, when the first vision detection image is displayed, the terminal device may be controlled to increase the brightness of its display screen, for example to maximum brightness, so that a target user at a distance can see the vision icons clearly. When the eye chart is exited, the display screen of the terminal device can be restored to its original brightness.
For example, a plurality of brightness adjustment modes may also be provided: (1) automatically adjusting the display brightness according to the ambient light; (2) letting the user set the brightness manually; (3) automatically adjusting to a brightness pre-configured by the user. The target user can select one of these brightness adjustment modes. In addition, the minimum screen brightness can be set by the user.
Fig. 8A is a schematic diagram of some vision icons provided by some embodiments of the present disclosure. As shown in fig. 8A, the vision icon currently being tested may be indicated by an arrow, and information such as the vision value corresponding to the current vision icon, the number of optotypes the target user has been tested on and the accuracy rate may also be displayed on the terminal device.
For example, the vision test flow may be as follows. When the target user takes the test for the first time, testing starts from the optotype line corresponding to the vision value 4.0. If the first optotype of a line is answered correctly, the test jumps directly to the next line, until the first optotype of some line is answered wrongly or the target user indicates that it cannot be seen clearly; that line is then tested in detail, i.e., several further optotypes on the line are shown. If more than 1/2 of these optotypes are answered correctly, the line is considered passed and the test continues to the next line, up to the last line (corresponding to the vision value 5.3); if more than 1/2 of the optotypes on that last line are answered correctly, the recorded result is 5.3. If the first optotype of a line is answered wrongly (or the user cannot see it clearly) and the accuracy over the several optotypes tested on that line is below 1/2, the test jumps up one line at a time until the accuracy on some line exceeds 1/2, and the vision value corresponding to the last line that passed is recorded as the target user's vision value. If the target user cannot see the optotype corresponding to the vision value 4.0, the test jumps up to the lowest line, 3.7; if the 3.7 line is still not seen clearly, the target user's vision value is recorded as less than 3.7.
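One way to read that flow as code is sketched below; it is an interpretation, with the per-line probe count of 5 an assumption, and the short-circuit where a correct first optotype skips the rest of the line is omitted for brevity.

```python
# A minimal sketch of the line-jumping test flow: a line passes when more
# than half of its probed optotypes are answered correctly.
from typing import Callable, List

LINES: List[float] = [3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5,
                      4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3]

def line_passes(ask: Callable[[float], bool], line: float, probes: int = 5) -> bool:
    """ask(line) shows one optotype on that line and reports correctness."""
    correct = sum(ask(line) for _ in range(probes))
    return correct * 2 > probes          # strictly more than half correct

def run_test(ask: Callable[[float], bool]) -> float:
    idx = LINES.index(4.0)               # the first test starts at 4.0
    if line_passes(ask, LINES[idx]):
        # Keep moving down (toward 5.3) while lines continue to pass.
        while idx + 1 < len(LINES) and line_passes(ask, LINES[idx + 1]):
            idx += 1
        return LINES[idx]
    # Failed the starting line: jump up one line at a time until one passes.
    while idx > 0:
        idx -= 1
        if line_passes(ask, LINES[idx]):
            return LINES[idx]
    return 3.6  # even the 3.7 line failed: record the result as "< 3.7"
```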
For example, after the target user's vision value has been determined, it may be recorded and stored in the terminal device and/or uploaded to and stored in the server.
For example, after the target user's vision value is detected, a vision report may be generated; the vision report may include user information, the currently detected vision value, whether the vision is normal, related medical advice, a vision value trend graph, and the like. While the terminal device is offline, the vision report can be stored temporarily on the local terminal device and synchronized to the cloud server once the terminal device is networked; data can also be downloaded from the cloud server to the local terminal device, with local data and server data merged. The target user can view the current vision report and historical vision reports on the terminal device; a two-dimensional code can also be generated from the target user's ID so that the corresponding vision report can be viewed by scanning the code, and the vision data can also be viewed on other terminal devices that communicate with the cloud server.
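The offline-first behavior described above can be pictured with a small merge routine. All names (`local_reports`, `server.download`, `server.upload`) and the local-wins conflict rule are hypothetical; the embodiments do not specify a merge policy:

```python
# Illustrative offline-first merge; reports are keyed by (user_id, timestamp).
def merge_reports(local_reports, server_reports):
    """Union of both report sets; local copies win on a key conflict."""
    merged = dict(server_reports)
    merged.update(local_reports)
    return merged

def sync(local_reports, server):
    server_reports = server.download()   # hypothetical server API
    merged = merge_reports(local_reports, server_reports)
    server.upload(merged)                # push the merged set back
    return merged                        # becomes the new local store
```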
For example, information may be pushed to the target user periodically; the pushed information may include, for example, a reminder to test vision regularly, and if the target user has not tested vision within a predetermined period (e.g., 1 month), the user may be prompted to test again.
Fig. 8B is a schematic diagram of a user interface provided by some embodiments of the present disclosure. As shown in fig. 8B, each target user may have a label of his or her own. Several labels can be preset, for example the 4 labels father, mother, boy, and girl; the name of each label can be customized, for example, these labels may be renamed "Baoda", "Baoma", "Tianchou", and "Dameiniu", and a visitor label can also be added. Each user can register by entering information such as name, age, and face, and after a child's age is entered, the child's grade can be calculated automatically. When the number of users is greater than or equal to a predetermined number (e.g., 6), a "more" option is displayed, and users not shown previously can be displayed under that option. Users may be ranked in descending order of their last test time, i.e., the most recently tested user ranked first; alternatively, a preset number of labels may be pinned to the top. After one of the users is selected, testing can begin, for example a distance test followed by a vision test. Alternatively, the user can test first and be prompted to enter user information when viewing the vision report after the test is finished.
For example, each user can be bound to a terminal device, and each user ID is unique. After networking, the terminal device can upload the user information to the server, and operations such as editing and deleting the user information can also be performed.
For example, in some examples, performing vision testing on the target object based on the vision test image may include: acquiring the vision state of the target object, where the vision state includes a historical vision test result and/or vision data entered by the target object; and controlling the terminal device to start testing from the vision icon group corresponding to that vision state.
For example, if the current vision test is not the target user's first, the vision value determined in the last test may be obtained and testing may start from the corresponding visual target row. For instance, if the target user's last detected vision value was 4.8, the current test can start from the row with vision value 4.8 instead of from 4.0. In this way the test flow is simplified, and the target user can see whether his or her vision is unchanged, improved, or weakened compared with the last test.
For example, the target user may also enter his or her current vision condition, such as whether he or she is near-sighted, the power of any glasses worn, whether he or she is amblyopic, and the vision value measured at an optometry or physical examination institution. When the target user performs vision testing, testing may start from a vision value corresponding to the entered condition; for example, if the target user has high-power myopia or amblyopia, testing may start from a lower vision value, such as 3.7.
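A small sketch of how the starting row might be chosen from the vision state described in the last two paragraphs; the 600-degree myopia threshold and the field names are illustrative assumptions, not values from the disclosure:

```python
def starting_row(last_result=None, self_reported=None,
                 default=4.0, lowest=3.7):
    """Pick the vision-value row to start testing from (rules illustrative)."""
    if last_result is not None:
        return last_result               # resume from the last measured value
    if self_reported:
        high_myopia = self_reported.get("myopia_degree", 0) >= 600  # assumption
        if high_myopia or self_reported.get("amblyopia"):
            return lowest                # start low for reported weak vision
    return default
```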
For example, in other examples, performing vision testing on the target object based on the vision test image includes: acquiring input information from the target object, where the input information includes at least one of voice information, gesture information, eyeball image information, head image information, and remote control information (other information is also possible, which is not limited in the embodiments of the present disclosure); determining, based on the input information, the orientation of the vision icon as judged by the target object; and determining the vision test result of the target object based on the orientation judged by the target object.
For example, in some examples, words such as "up", "down", "left", and "right" spoken by the user may be received, and the orientation judged by the user obtained through a speech recognition algorithm; words such as "can't see clearly", "don't know", and "not sure" may also be received and interpreted as the current visual target being unclear to the user. In addition, rows may be switched by voice commands such as "skip up one row" and "skip down one row", and another visual target in the current row may be switched to by a command such as "change one", which is not limited in the embodiments of the present disclosure.
For example, in some examples, the user may indicate the four directions up, down, left, and right by gesture; by capturing a gesture image and applying an image recognition algorithm, the orientation judged by the user can be obtained, and a predetermined gesture may be used to indicate that the target user cannot see the visual target clearly.
For example, in some examples, the user may indicate the four directions up, down, left, and right by rotating the eyeballs; when the target user's eyeballs are detected to be looking straight ahead for more than a predetermined period (e.g., 3 seconds), or an eye-closing action is detected, it may be determined that the user cannot see the visual target clearly.
For example, in some examples, the target user may indicate the direction of the optotype by turning the head.
For example, in some examples, the target user may press the up, down, left, and right keys of a remote control and other designated keys (e.g., an "OK" key) to indicate the four directions and an "unclear" instruction, respectively. These options make the test more flexible: a user can choose an input mode according to preference, improving the user experience.
It should be noted that other ways of indicating the four directions and the "unclear" instruction may also be used, and the embodiments of the present disclosure are not limited in this respect.
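All of the modalities above reduce to the same five abstract answers: four directions plus "unclear". A minimal normalization sketch, assuming a hypothetical recognizer layer that emits (modality, payload) events; the names and mappings are illustrative:

```python
# Illustrative normalization of heterogeneous inputs into one answer space.
DIRECTIONS = {"up", "down", "left", "right"}
UNCLEAR = "unclear"

VOICE_MAP = {  # recognized phrase -> abstract answer (example phrases)
    "up": "up", "down": "down", "left": "left", "right": "right",
    "can't see clearly": UNCLEAR, "don't know": UNCLEAR, "not sure": UNCLEAR,
}

def interpret(event):
    """event: (modality, payload) emitted by a hypothetical recognizer layer."""
    modality, payload = event
    if modality == "voice":
        return VOICE_MAP.get(payload)                 # None if unrecognized
    if modality in ("gesture", "gaze", "head", "remote"):
        return payload if payload in DIRECTIONS | {UNCLEAR} else None
    return None

def is_correct(answer, true_direction):
    return answer == true_direction                   # "unclear" counts as wrong
```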
For example, in other examples, performing vision testing on the target object based on the vision test image may include: acquiring a face image of the target object during the vision test; determining the eye occlusion state of the target object based on the face image; and, when the eye occlusion state does not meet a predetermined occlusion requirement, controlling the terminal device to output prompt information and/or pause the vision test.
For example, during the target user's vision test, whether the target user has covered one eye can be monitored in real time. The predetermined occlusion requirement may, for example, be that one eye is covered, and further that the covered eye is the one not currently being tested. During the test, a face image of the target user may be captured at predetermined intervals (e.g., every 1 second) and the target user's eyes identified from it with an image recognition algorithm. If both eyes are identified, the target user is considered not to have covered an eye and the predetermined occlusion requirement is not met, so the terminal device may output a voice or on-screen prompt asking the user to cover one eye. If only one eye is identified, it can further be judged whether the identified eye is the one currently being tested; if not, the predetermined occlusion requirement is again not met, and the terminal device may prompt the user to cover the other eye. Alternatively, when the eye occlusion state does not meet the predetermined occlusion requirement, the vision test can be paused until the occlusion state is detected to meet the requirement, and then resumed. In this way, cheating by the target user can be prevented and a more accurate vision test result obtained.
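A minimal sketch of the occlusion check, using OpenCV's bundled Haar cascades as the eye detector; the particular detector, the exactly-one-visible-eye rule, and the left/right mapping are assumptions of this illustration, not the disclosed algorithm:

```python
import cv2

# Hypothetical occlusion check using OpenCV's bundled Haar cascades.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def occlusion_ok(frame_bgr, tested_side="left"):
    """Return (ok, hint): ok is True when exactly one eye is visible and it is
    on the side being tested; otherwise hint holds a prompt for the user."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return False, "face not found"
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes) >= 2:
        return False, "please cover one eye"
    if len(eyes) == 0:
        return False, "eyes not visible"
    ex = eyes[0][0] + eyes[0][2] / 2             # eye center, face coordinates
    visible = "left" if ex < w / 2 else "right"  # image side only; mapping to
    if visible != tested_side:                   # the user's eye depends on
        return False, "please cover the other eye"  # camera mirroring (assumed)
    return True, ""
```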
For example, during the vision test it may also be monitored whether the distance between the target user and the terminal device remains at the first predetermined distance, for example by matching the contour of the specific body part or by using a ranging device. When the user is detected to have moved, the user can be prompted to return to the first predetermined distance, or the vision test can be paused and resumed once the target user has returned to the first predetermined distance.
For example, the detection method may further include: when the detection distance included in the detection request is switched from the first predetermined distance to a second predetermined distance, determining second contour information corresponding to the second predetermined distance based on the first contour information; determining, based on the second contour information, whether the distance between the target object and the terminal device is the second predetermined distance; and, when the distance between the target object and the terminal device is the second predetermined distance, controlling the terminal device to display a second vision detection image corresponding to the second predetermined distance so as to perform vision detection on the target object based on the second vision detection image.
For example, a plurality of predetermined distances may be preset; for a vision test scenario, these may include, for example, 1 m, 2 m, 3 m, and 5 m, and the target user may select one of them as the test distance according to the size of the site. If the target user makes no selection, one of the predetermined distances is used as the default (e.g., 3 m); for example, the first predetermined distance may be the default distance. When the target user switches from the first predetermined distance to a second predetermined distance, the first contour information is scaled proportionally: scaled up when the second predetermined distance is smaller than the first, and scaled down when it is greater, so that the resulting second contour information corresponds to the second predetermined distance. Distance detection is then performed on the target user with the second contour information, and when the target user's contour information to be matched matches the second contour information, the target user is determined to be at the second predetermined distance.
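Under a pinhole-camera assumption, the apparent size of a body part in the image is inversely proportional to its distance from the camera, so the stored first contour can be rescaled by the ratio of the two distances. A sketch under that assumption:

```python
import numpy as np

def rescale_contour(contour_xy, d1, d2):
    """Rescale contour points recorded at distance d1 to correspond to d2.
    Under a pinhole model, apparent size ~ 1/distance, so the factor is
    d1/d2: the contour grows when d2 < d1 (user closer) and shrinks when
    d2 > d1 (user farther). Scaling is about the contour centroid."""
    pts = np.asarray(contour_xy, dtype=float)
    center = pts.mean(axis=0)
    return center + (d1 / d2) * (pts - center)
```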
For example, after switching to the second predetermined distance and before performing the vision test, the size of the vision icons must likewise be scaled proportionally: scaled down when the second predetermined distance is smaller than the first predetermined distance, and scaled up when it is greater, so that the scaled icon size corresponds to the second predetermined distance. In this way the target user can switch the test distance to suit different site environments and other factors.
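The icon scaling can be made concrete with the standard visual-angle model: a threshold optotype subtends about 5 arcminutes, and on the 5-point vision scale the minimum angle of resolution for a vision value V is 10^(5 - V) arcminutes. A sketch under these standard assumptions (the pixel-pitch parameter is illustrative):

```python
import math

ARCMIN = math.pi / (180 * 60)   # one arcminute in radians

def optotype_px(vision_value, distance_mm, px_per_mm):
    """Pixel height of an optotype for a 5-point-scale vision value: the
    optotype subtends 5 * MAR arcminutes, with MAR = 10**(5 - vision_value)."""
    mar = 10 ** (5.0 - vision_value)        # minimum angle of resolution
    height_mm = 2 * distance_mm * math.tan(5 * mar * ARCMIN / 2)
    return round(height_mm * px_per_mm)

# Switching from 3 m to 1 m shrinks every icon to one third of its size
# (to within rounding), keeping the visual angle constant:
# optotype_px(4.0, 1000, 5) ~= optotype_px(4.0, 3000, 5) / 3
```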
For example, in some examples, the detection method may further include: acquiring eye use behavior information of the target object while the target object uses the terminal device; and outputting prompt information when the eye use behavior information meets a predetermined eye use condition.
For example, the eye use behavior information may include behavior information such as the duration for which the target user uses the terminal device and the distance between the target user and the display screen, and the predetermined eye use condition may include, for example, the duration of use exceeding a predetermined threshold (e.g., 2 hours) or the distance to the display screen being less than a predetermined threshold (e.g., 30 cm); other conditions may also be included, which is not limited in the embodiments of the present disclosure. When a predetermined eye use condition is met, the target user is prompted to rest or to move away from the display screen.
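A trivial sketch of the threshold check just described; the threshold values are the examples from the text, and the prompt strings are illustrative:

```python
def eye_use_alert(usage_minutes, screen_distance_cm,
                  max_minutes=120, min_distance_cm=30):
    """Return a prompt string when a predetermined eye use condition is met,
    otherwise None. Thresholds follow the examples in the text."""
    if usage_minutes > max_minutes:
        return "You have been using the screen for a long time; please rest."
    if screen_distance_cm < min_distance_cm:
        return "You are too close to the screen; please move back."
    return None
```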
It should be noted that the flows of the detection methods provided by the above embodiments of the present disclosure may include more or fewer operations, and these operations may be performed sequentially or in parallel. Although the flows described above include operations occurring in a particular order, the order of those operations is not limited. The detection methods may be performed once or multiple times according to a predetermined condition.
Fig. 9 shows a system that may be used to implement the detection method and/or the vision detection method provided by any embodiment of the present disclosure. As shown in fig. 9, the system 10 may include, for example, a user terminal 11, a network 12, a server 13, and a database 14.
The user terminal 11, i.e. the terminal device in the above embodiments of the present disclosure, is, for example, a computer 11-1 or a mobile phone 11-2. It is understood that the user terminal 11 may be any other type of electronic device capable of performing data processing, which may include, but is not limited to, a desktop computer, a laptop computer, a tablet computer, a smart phone, a smart home device, a wearable device, a vehicle-mounted electronic device, a monitoring device, and the like. The user terminal may also be any equipment provided with an electronic device, such as a vehicle, a robot, etc. Embodiments of the present disclosure do not limit the hardware configuration or the software configuration (e.g., the type (e.g., Windows, MacOS, etc.) or version of the operating system) of the user terminal.
The user may operate an application installed on the user terminal 11 or a website logged into on the user terminal 11; the application or website transmits user behavior data to the server 13 through the network 12, and the user terminal 11 may also receive data transmitted by the server 13 through the network 12. The user terminal 11 may implement the detection method and/or the vision detection method provided by the embodiments of the present disclosure by running a subprogram or sub-thread.
For example, in some embodiments, the detection method provided by the embodiments of the present disclosure may be performed by a processing unit of the user terminal 11. In some implementations, the user terminal 11 may perform the detection method using an application built in the user terminal 11. In other implementations, the user terminal 11 may execute the detection method and/or the vision detection method provided by at least one embodiment of the present disclosure by calling an application program stored outside the user terminal 11.
In other embodiments, the user terminal 11 transmits the acquired image to the server 13 via the network 12, and the server 13 performs the detection method. In some implementations, the server 13 may perform the detection method using an application built into the server. In other implementations, the server 13 may perform the detection method and/or the vision detection method by invoking an application stored externally to the server 13.
The network 12 may be a single network or a combination of at least two different networks. For example, the network 12 may include, but is not limited to, one or a combination of local area networks, wide area networks, public networks, private networks, and the like.
The server 13 may be a single server or a group of servers, for example, a cluster of internet of things servers, and the servers in the group are connected via a wired or wireless network. A group of servers may be centralized, such as a data center, or distributed. The server 13 may be local or remote.
The database 14 may generally refer to a device having a storage function. The database 14 is mainly used to store the various data used, generated, and output by the user terminal 11 and the server 13 in operation. The database 14 may be local or remote, and may include various memories, such as a Random Access Memory (RAM) and a Read Only Memory (ROM). The storage devices mentioned above are only examples, and the storage devices usable by the system are not limited to these.
The database 14 may be interconnected or in communication with the server 13 or a portion thereof via the network 12, or directly interconnected or in communication with the server 13, or a combination thereof.
In some embodiments, the database 14 may be a stand-alone device. In other embodiments, the database 14 may also be integrated in at least one of the user terminal 11 and the server 13. For example, the database 14 may be provided on the user terminal 11 or on the server 13. As another example, the database 14 may be distributed, with one part provided on the user terminal 11 and another part provided on the server 13.
For example, a model database may be deployed on the database 14. When scene data needs to be acquired, the user terminal 11 accesses the database 14 through the network 12 and obtains the scene data stored in the database 14 through the network 12. The embodiments of the present disclosure do not limit the type of the database, which may be, for example, a relational database or a non-relational database.
At least one embodiment of the present disclosure further provides a detection apparatus. The apparatus obtains, for each user, contour information corresponding to a predetermined distance, and determines whether a user is located at the predetermined distance by matching that stored contour information against the user's current contour information. This improves the accuracy of the distance detection result and avoids the inaccuracy that results from using a uniform contour to detect the distance of different users.
Fig. 10 is a schematic block diagram of a detection apparatus according to some embodiments of the present disclosure. As shown in fig. 10, the detection apparatus 100a includes a contour acquisition unit 110 and a contour matching unit 120. For example, the detection apparatus 100a may be applied to a user terminal or a server communicating with the user terminal, and the embodiment of the disclosure is not limited thereto. For example, the units/modules in the detection apparatus may be implemented by any combination of hardware, software, or firmware, and the following embodiments are the same and will not be described again.
The contour acquisition unit 110 is configured to acquire contour information to be matched of a specific body part of a target object in response to a detection request of the target object. For example, the contour acquisition unit 110 may perform step S10 of the detection method shown in fig. 1. For specific description, reference may be made to the description of step S10, which is not described herein again.
The contour matching unit 120 is configured to determine whether the distance between the target object and the terminal device is a first predetermined distance based on the degree of matching between the contour information to be matched and first contour information of a specific body part of the target object. The first contour information is determined based on a target image and corresponds to the first predetermined distance; the target image is an image containing the specific body part of the target object, acquired when the distance between the target object and the terminal device is the first predetermined distance. For example, the contour matching unit 120 may perform step S20 of the detection method shown in fig. 1; for details, reference may be made to the description of step S20, which is not repeated here.
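The matching degree can be computed with any shape-similarity measure. The sketch below uses intersection-over-union of the filled contour regions, which, unlike scale-invariant shape moments, is sensitive to apparent size and therefore to distance; this particular measure and the threshold are assumptions of the illustration, not necessarily what the embodiments use:

```python
import cv2
import numpy as np

def matching_degree(contour_now, first_contour, frame_hw):
    """Illustrative matching degree: intersection-over-union of the two
    filled contour regions. IoU is sensitive to apparent size and position,
    which is what encodes the user's distance in the image."""
    mask_a = np.zeros(frame_hw, np.uint8)
    mask_b = np.zeros(frame_hw, np.uint8)
    cv2.drawContours(
        mask_a, [np.asarray(contour_now, np.int32).reshape(-1, 1, 2)], -1, 1, -1)
    cv2.drawContours(
        mask_b, [np.asarray(first_contour, np.int32).reshape(-1, 1, 2)], -1, 1, -1)
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def at_first_distance(contour_now, first_contour, frame_hw, threshold=0.8):
    """The target object is taken to be at the first predetermined distance
    when the matching degree meets the predetermined matching condition
    (the threshold value here is illustrative)."""
    return matching_degree(contour_now, first_contour, frame_hw) >= threshold
```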
It should be noted that, in the embodiment of the present disclosure, each unit of the detection apparatus 100a corresponds to each step of the detection method, and for the specific function of the detection apparatus 100a, reference may be made to the description related to the detection method above, and details are not repeated here. The components and configuration of the detection device 100a shown in FIG. 10 are exemplary only, and not limiting, and the detection device 100a may also include other components and configurations as desired.
For example, in some examples, the detection apparatus 100a may further include an image acquisition unit configured to acquire a target image, for example, acquiring the target image may include: responding to a first detection request of a target object, and detecting the distance between the target object and the terminal equipment; and controlling the terminal equipment to acquire the target image when the distance between the target object and the terminal equipment is a first preset distance.
For example, in some examples, the detection apparatus 100a may further include a first prompting unit configured to control the terminal device to output first prompting information to prompt the target object to move away from the terminal device to match the contour information to be matched with the first contour information, in a case where a distance between the target object and the terminal device is smaller than a first predetermined distance; and under the condition that the distance between the target object and the terminal equipment is larger than the first preset distance, controlling the terminal equipment to output second prompt information to prompt the target object to move towards the direction close to the terminal equipment so that the contour information to be matched is matched with the first contour information.
For example, in some examples, the detection apparatus 100a may further include a switching unit configured to determine, based on the first profile information, second profile information corresponding to a second predetermined distance in a case where the detection distance included in the detection request is switched from the first predetermined distance to the second predetermined distance; determining whether the distance between the target object and the terminal equipment is a second preset distance or not based on the second contour information; and controlling the terminal device to display a second vision detection image corresponding to a second predetermined distance to perform vision detection on the target object based on the second vision detection image, in a case where the distance between the target object and the terminal device is the second predetermined distance.
It should be noted that, for clarity and conciseness, not all the constituent elements of the detecting device 100a are shown in the embodiments of the present disclosure. To achieve the necessary functions of the detecting device 100a, those skilled in the art may provide and arrange other components not shown according to specific needs, and the embodiment of the disclosure is not limited thereto.
For the related description and technical effects of the detection apparatus 100a, reference may be made to the related description and technical effects of the detection method provided in the embodiments of the present disclosure, which are not repeated herein.
At least one embodiment of the present disclosure also provides a vision inspection apparatus including: the detection device in the above embodiment; the control unit is configured to control the terminal device to display a first vision detection image corresponding to a first preset distance under the condition that the distance between the target object and the terminal device is the first preset distance, wherein the first vision detection image comprises at least one vision icon group, each vision icon group comprises at least one vision icon, and each vision icon group is used for representing one vision value; and a vision detection unit configured to perform vision detection on the target object based on the first vision detection image to obtain a vision detection result of the target object.
Fig. 11 is a schematic block diagram of a vision testing apparatus provided in some embodiments of the present disclosure. As shown in fig. 11, the vision inspection apparatus 100b includes the contour acquisition unit 110 and the contour matching unit 120 in the inspection apparatus 100a described above, and may further include a control unit 130 and a vision inspection unit 140.
The control unit 130 is configured to control the terminal device to display a first vision inspection image corresponding to a first predetermined distance, for example, the first vision inspection image including at least one vision icon, in a case where the distance between the target object and the terminal device is the first predetermined distance. For example, the control unit 130 may perform step S41 of the detection method as shown in fig. 7. For specific description, reference may be made to the description of step S41, which is not described herein again.
The vision detecting unit 140 is configured to perform vision detection on the target object based on the first vision detection image to obtain a vision detection result of the target object. For example, the vision detecting unit 140 may perform step S42 of the detecting method shown in fig. 7. For specific description, reference may be made to the description of step S42, which is not described herein again.
It should be noted that, in the embodiment of the present disclosure, each unit of the vision detecting apparatus 100b corresponds to each step of the foregoing detecting method, and for the specific function of the vision detecting apparatus 100b, reference may be made to the description related to the detecting method above, and details are not repeated here. The components and configuration of the vision testing device 100b shown in fig. 11 are exemplary only, and not limiting, and the vision testing device 100b may include other components and configurations as desired.
For example, in some examples, the vision detecting apparatus 100b may further include a second prompting unit configured to acquire eye use behavior information of the target object during use of the terminal device by the target object; and outputting the prompt information under the condition that the eye using behavior information meets the preset eye using condition.
It should be noted that, for clarity and conciseness, not all the constituent elements of the vision testing apparatus 100b are shown in the embodiments of the present disclosure. To realize the necessary functions of the vision detecting apparatus 100b, those skilled in the art may provide and arrange other components not shown according to specific needs, and the embodiment of the present disclosure is not limited thereto.
For the related description and technical effects of the vision testing apparatus 100b, reference may be made to the related description and technical effects of the vision testing method provided in the embodiments of the present disclosure, and details are not repeated here.
Fig. 12 is a schematic block diagram of a detection apparatus according to some embodiments of the present disclosure. As shown in fig. 12, the detection apparatus 200 includes a processor 210 and a memory 220. Memory 220 is used to store non-transitory computer readable instructions (e.g., one or more computer program modules). The processor 210 is configured to execute non-transitory computer readable instructions, which when executed by the processor 210 may perform one or more of the detection methods or vision detection methods described above. The memory 220 and the processor 210 may be interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the processor 210 may be a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or another form of processing unit having data processing capabilities and/or program execution capabilities, such as a Field Programmable Gate Array (FPGA); for example, the Central Processing Unit (CPU) may have an X86 or ARM architecture. The processor 210 may be a general-purpose processor or a special-purpose processor, and may control other components in the detection apparatus 200 to perform desired functions.
For example, the memory 220 may include any combination of one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory, and the like. Non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read-only memory (CD-ROM), USB memory, flash memory, and the like. One or more computer program modules may be stored on the computer-readable storage medium and executed by the processor 210 to implement various functions of the detection apparatus 200. Various applications and data, as well as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
It should be noted that, in the embodiment of the present disclosure, reference may be made to the above description on the detection method or the vision detection method for specific functions and technical effects of the detection apparatus 200, and details are not described herein again.
At least one embodiment of the present disclosure also provides an electronic device. Fig. 13 is a schematic block diagram of an electronic device provided in some embodiments of the present disclosure. As shown in fig. 13, the electronic device 300 includes the detection apparatus 310 and/or the vision detection apparatus 320 provided in any of the above embodiments, and a terminal device 330, for example, the terminal device 330 is configured to display the contour information to be matched and the first contour information. For example, in other examples, the terminal device is further configured to display a vision test image or the like, which is not limited by embodiments of the present disclosure. For example, the detecting device 310 may be the detecting device 100a shown in fig. 10 or the detecting device 200 shown in fig. 12, and the visual acuity detecting device 320 may be the visual acuity detecting device 100b shown in fig. 11 or the detecting device 200 shown in fig. 12.
It should be noted that, in the embodiment of the present disclosure, reference may be made to the above description on the detection apparatus or the detection method, the vision detection apparatus or the vision detection method for specific functions and technical effects of the electronic device 300, and details are not described herein again.
Fig. 14 is a schematic block diagram of another electronic device provided by some embodiments of the present disclosure. The electronic device 400 is, for example, suitable for implementing the detection method and/or the vision detection method provided by the embodiments of the present disclosure. The electronic device 400 may be a user terminal or the like. It should be noted that the electronic device 400 shown in fig. 14 is only one example, and does not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 14, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 410 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 420 or a program loaded from a storage device 480 into a Random Access Memory (RAM) 430. The RAM 430 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 410, the ROM 420, and the RAM 430 are connected to each other by a bus 440. An input/output (I/O) interface 450 is also connected to the bus 440.
Generally, the following devices may be connected to the I/O interface 450: input devices 460 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 470 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, etc.; storage devices 480 including, for example, magnetic tape, hard disk, etc.; and a communication device 490. The communication device 490 may allow the electronic device 400 to communicate wirelessly or by wire with other electronic devices to exchange data. While fig. 14 illustrates an electronic device 400 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided, and the electronic device 400 may alternatively be implemented with or provided with more or fewer means.
For example, the detection method and/or the vision detection method according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program comprising program code for performing the above-described detection method and/or vision detection method. In such embodiments, the computer program may be downloaded and installed from a network through communication device 490, or installed from storage device 480, or installed from ROM 420. When executed by the processing device 410, the computer program may perform the functions defined in the detection method and/or the vision detection method provided by the embodiments of the present disclosure.
It should be noted that the computer readable medium described above in this disclosure can be a computer readable signal medium or a non-transitory computer readable storage medium or any combination of the two. The non-transitory computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the non-transitory computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a non-transitory computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a non-transitory computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as the HyperText Transfer Protocol (HTTP), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: responding to a detection request of a target object, and acquiring contour information to be matched of a specific body part of the target object; and determining whether the distance between the target object and the terminal device is a first predetermined distance or not based on the matching degree of the contour information to be matched and first contour information of the specific body part of the target object, wherein the first contour information is determined based on the target image and corresponds to the first predetermined distance, and the target image is an image which is acquired under the condition that the distance between the target object and the terminal device is the first predetermined distance and contains the specific body part of the target object.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
At least one embodiment of the present disclosure also provides a storage medium for storing non-transitory computer-readable instructions, which when executed by a computer, can implement the detection method or the vision detection method according to any embodiment of the present disclosure.
Fig. 15 is a schematic diagram of a storage medium according to some embodiments of the present disclosure. As shown in fig. 15, the storage medium 500 is used to store non-transitory computer readable instructions 510. For example, the non-transitory computer readable instructions 510, when executed by a computer, may perform one or more steps according to the detection method or the vision detection method described above.
For example, the storage medium may be any combination of one or more computer-readable storage media, such as one containing computer-readable program code for acquiring contour information to be matched of a specific body part of a target object in response to a detection request of the target object, and another containing computer-readable program code for determining whether a distance between the target object and a terminal device is a first predetermined distance based on a degree of matching of the contour information to be matched and first contour information of the specific body part of the target object. For example, when the program code is read by a computer, the computer may execute the program code stored in the computer storage medium, perform a detection method or a vision detection method provided by any of the embodiments of the present disclosure, for example.
For example, the storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a flash memory, or any combination of the above, as well as other suitable storage media.
For example, the storage medium 500 may be applied to the electronic device 400 described above. The storage medium 500 may be, for example, the ROM 420 in the electronic device 400 shown in fig. 14. For a related description of the storage medium 500, reference may be made to the corresponding description of the ROM 420 in the electronic device 400 shown in fig. 14, which is not repeated here.
The detection method, the detection apparatus, the vision detection method, the vision detection apparatus, the electronic device, and the storage medium provided by the embodiments of the present disclosure are described above with reference to figs. 1 to 15. With the detection method provided by the embodiments of the present disclosure, whether the distance between a target user and the terminal device is a predetermined distance can be detected: the contour information of each user corresponding to the predetermined distance is obtained, and whether a user is located at the predetermined distance is determined by matching that user's stored contour information against the user's current contour information. This improves the accuracy of the distance detection result and avoids the inaccuracy caused by using a uniform contour to detect the distance of different users.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the present disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only exemplary of the embodiments of the present disclosure and an illustration of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the features described above with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (21)

1. A method of detection, comprising:
in response to a detection request of a target object, acquiring contour information to be matched of a specific body part of the target object; and
determining whether the distance between the target object and the terminal device is a first predetermined distance based on the degree of matching between the contour information to be matched and first contour information of the specific body part of the target object;
wherein the first contour information is determined based on a target image corresponding to the first predetermined distance, and the target image is an image of a specific body part including the target object, which is acquired when the distance between the target object and the terminal device is the first predetermined distance.
2. The detection method of claim 1, further comprising:
the target image is acquired and the target image is displayed,
wherein acquiring the target image comprises:
in response to a first detection request of the target object, detecting the distance between the target object and the terminal device; and
controlling the terminal device to acquire the target image when the distance between the target object and the terminal device is the first predetermined distance.
3. The detection method according to claim 2, wherein the distance between the target object and the terminal device is detected based on a ranging device.
4. The detection method according to claim 2, wherein the target object holds a reference object while the distance between the target object and the terminal device is being detected;
wherein detecting the distance between the target object and the terminal device comprises:
acquiring object contour information of the reference object, wherein the object contour information corresponds to the first predetermined distance;
acquiring a reference image, wherein the reference image is an image containing the reference object;
determining profile information to be matched of the reference object based on the reference image; and
determining, based on the degree of matching between the contour information to be matched of the reference object and the object contour information, the distance between the reference object and the terminal device as the distance between the target object and the terminal device.
5. The detection method according to any one of claims 1 to 4, further comprising:
determining that the distance between the target object and the terminal device is the first predetermined distance when the degree of matching between the contour information to be matched and the first contour information meets a predetermined matching condition.
6. The detection method according to any one of claims 1 to 4, wherein acquiring contour information to be matched of a specific body part of the target object comprises:
acquiring an image to be matched containing the specific body part of the target object; and
determining the contour information to be matched of the specific body part based on the image to be matched.
7. The detection method according to any one of claims 1 to 4, further comprising:
when the distance between the target object and the terminal device is smaller than the first predetermined distance, controlling the terminal device to output first prompt information to prompt the target object to move away from the terminal device so that the contour information to be matched matches the first contour information; and/or
when the distance between the target object and the terminal device is greater than the first predetermined distance, controlling the terminal device to output second prompt information to prompt the target object to move toward the terminal device so that the contour information to be matched matches the first contour information.
8. A method of vision testing, comprising:
determining whether the distance between the target object and the terminal device is the first predetermined distance by using the detection method according to any one of claims 1 to 7;
controlling, when the distance between the target object and the terminal device is the first predetermined distance, the terminal device to display a first vision detection image corresponding to the first predetermined distance, wherein the first vision detection image comprises at least one vision icon group, each vision icon group comprises at least one vision icon, and each vision icon group is used for representing one vision value; and
performing vision detection on the target object based on the first vision detection image to obtain a vision detection result of the target object.
9. The vision testing method of claim 8, wherein controlling the terminal device to display the first vision detection image corresponding to the first predetermined distance comprises:
determining a display size of each vision icon in each vision icon group based on the first predetermined distance;
determining the orientation of each vision icon, wherein the orientation of each vision icon is determined randomly; and
generating the first vision detection image based on the display size and orientation of each vision icon.
10. The vision testing method of claim 8, wherein performing vision detection on the target object based on the first vision detection image comprises:
acquiring input information of the target object, wherein the input information comprises at least one of voice information, gesture information, eyeball image information, head image information and remote control information;
determining, based on the input information, the orientation of the vision icon as judged by the target object; and
determining the vision detection result of the target object based on the orientation judged by the target object.
11. The vision testing method of claim 8, wherein performing vision detection on the target object based on the first vision detection image comprises:
acquiring the vision state of the target object, wherein the vision state comprises a historical vision detection result and/or vision data input by the target object; and
controlling the terminal device to start detection from the vision icon group corresponding to the vision state.
12. The vision testing method of claim 8, wherein performing vision detection on the target object based on the first vision detection image comprises:
acquiring a face image of the target object in a vision detection process;
determining an eye occlusion state of the target object based on the facial image; and
controlling the terminal device to output prompt information and/or pausing the vision detection when the eye occlusion state does not meet a predetermined occlusion requirement.
13. The detection method according to any one of claims 1 to 4, further comprising:
when the detection distance included in the detection request is switched from the first predetermined distance to a second predetermined distance, determining second contour information corresponding to the second predetermined distance based on the first contour information;
determining, based on the second contour information, whether the distance between the target object and the terminal device is the second predetermined distance; and
controlling, when the distance between the target object and the terminal device is the second predetermined distance, the terminal device to display a second vision detection image corresponding to the second predetermined distance, so as to perform vision detection on the target object based on the second vision detection image.
14. The vision detection method of any one of claims 1-4, further comprising:
acquiring eye-use behavior information of the target object while the target object uses the terminal device; and
outputting prompt information when the eye-use behavior information meets a predetermined eye-use condition.
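Claim 14 does not fix the eye-use condition; a sketch with two assumed conditions (continuous use beyond a time limit, or sitting too close):

```python
import time

USE_LIMIT_S = 30 * 60     # assumed: 30 minutes of continuous use
MIN_DISTANCE_M = 0.33     # assumed: closer than ~33 cm counts as too close

def eye_use_prompt(session_start: float, distance_m: float):
    """Return prompt information when a predetermined condition is met."""
    if time.time() - session_start > USE_LIMIT_S:
        return "You have been looking at the screen for a while; take a break."
    if distance_m < MIN_DISTANCE_M:
        return "You are too close to the screen; please move back."
    return None

print(eye_use_prompt(time.time() - 31 * 60, 0.5))  # time limit exceeded
```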
15. The vision detection method of any one of claims 1-4, wherein
the specific body part is the face of the target object, and the first contour information is face contour information; or
the specific body part is an iris of the target object, and the first contour information is iris contour information.
16. The vision detection method of any one of claims 1-4, wherein
the terminal device comprises a display unit configured to display the contour information to be matched and the first contour information.
17. A detection device, comprising:
a contour acquisition unit configured to acquire contour information to be matched of a specific body part of a target object in response to a detection request of the target object; and
a contour matching unit configured to determine whether a distance between the target object and a terminal device is a first predetermined distance based on a matching degree between the contour information to be matched and first contour information of the specific body part of the target object, wherein the first contour information is determined based on a target image and corresponds to the first predetermined distance, and the target image is an image, including the specific body part of the target object, that is acquired when the distance between the target object and the terminal device is the first predetermined distance.
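Claim 17 leaves the matching degree unspecified. One plausible measure (an assumption of this sketch, not named in the claim) is the intersection-over-union of the rasterised contour regions:

```python
def iou(mask_a: set, mask_b: set) -> float:
    """Matching degree in [0, 1] of two pixel-coordinate sets."""
    if not mask_a or not mask_b:
        return 0.0                 # treat an empty mask as no match
    return len(mask_a & mask_b) / len(mask_a | mask_b)

def at_first_predetermined_distance(live_mask, reference_mask,
                                    threshold: float = 0.85) -> bool:
    # threshold is an assumed tuning parameter
    return iou(live_mask, reference_mask) >= threshold

a = {(x, y) for x in range(10) for y in range(10)}
b = {(x, y) for x in range(2, 12) for y in range(10)}
print(iou(a, b))  # 80 / 120 ~= 0.667
```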
18. A vision detection device, comprising: the detection device according to claim 17, a control unit, and a vision detection unit; wherein
the control unit is configured to control the terminal device to display a first vision detection image corresponding to the first predetermined distance when the distance between the target object and the terminal device is the first predetermined distance, wherein the first vision detection image comprises at least one vision icon group, each vision icon group comprises at least one vision icon, and each vision icon group is used for representing one vision value; and
the vision detection unit is configured to perform vision detection on the target object based on the first vision detection image to obtain a vision detection result of the target object.
19. A detection device, comprising:
a processor;
a memory including one or more computer program modules;
wherein the one or more computer program modules are stored in the memory and configured to be executed by the processor, the one or more computer program modules comprising instructions for implementing the detection method of any one of claims 1-7 or the vision detection method of any one of claims 8-16.
20. An electronic device, comprising the detection device and/or the vision detection device of any one of claims 17-19, and the terminal device;
wherein the terminal device is configured to display the contour information to be matched and the first contour information.
21. A storage medium storing non-transitory computer-readable instructions which, when executed by a computer, implement the detection method of any one of claims 1-7 or the vision detection method of any one of claims 8-16.
CN202110473883.8A 2021-04-29 2021-04-29 Detection method, vision detection method, device, electronic equipment and storage medium Pending CN113509136A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110473883.8A CN113509136A (en) 2021-04-29 2021-04-29 Detection method, vision detection method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113509136A true CN113509136A (en) 2021-10-19

Family

ID=78063604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110473883.8A Pending CN113509136A (en) 2021-04-29 2021-04-29 Detection method, vision detection method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113509136A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116421135A (en) * 2023-04-27 2023-07-14 北京京东拓先科技有限公司 Vision testing method, device, electronic equipment and storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662334A (en) * 2012-04-18 2012-09-12 深圳市兆波电子技术有限公司 Method for controlling distance between user and electronic equipment screen and electronic equipment
CN104239860A (en) * 2014-09-10 2014-12-24 广东小天才科技有限公司 Detecting and reminding method and device for sitting posture in using process of intelligent terminal
CN112399817A (en) * 2018-02-22 2021-02-23 斯格本斯眼科研究所有限公司 Measuring refraction of eye
CN108968905A (en) * 2018-06-19 2018-12-11 湖州师范学院 Method, apparatus, system and the computer readable storage medium to give a test of one's eyesight
CN109431446A (en) * 2018-08-03 2019-03-08 中山大学附属眼科医院验光配镜中心 A kind of online eyesight exam method, device, terminal device and storage medium
CN109157186A (en) * 2018-10-25 2019-01-08 武汉目明乐视健康科技有限公司 Unmanned self-service vision monitoring instrument
CN109700423A (en) * 2018-12-29 2019-05-03 杭州瞳创医疗科技有限公司 A kind of the Intelligent eyesight detection method and device of automatic perceived distance
CN111387932A (en) * 2019-01-02 2020-07-10 中国移动通信有限公司研究院 Vision detection method, device and equipment
CN111543934A (en) * 2020-04-29 2020-08-18 深圳创维-Rgb电子有限公司 Vision detection method and device, electronic product and storage medium
CN111803023A (en) * 2020-06-24 2020-10-23 深圳数联天下智能科技有限公司 Vision value correction method, correction device, terminal equipment and storage medium
CN111915667A (en) * 2020-07-27 2020-11-10 深圳数联天下智能科技有限公司 Sight line identification method, sight line identification device, terminal equipment and readable storage medium

Similar Documents

Publication Publication Date Title
US11442539B2 (en) Event camera-based gaze tracking using neural networks
CN109635621B (en) System and method for recognizing gestures based on deep learning in first-person perspective
CN107105130B (en) Electronic device and operation method thereof
US10044712B2 (en) Authentication based on gaze and physiological response to stimuli
Mulfari et al. Using Google Cloud Vision in assistive technology scenarios
CN108491823B (en) Method and device for generating human eye recognition model
CN109219955A (en) Video is pressed into
CN106462242A (en) User interface control using gaze tracking
CN109154861A (en) Mood/cognitive state is presented
US10254831B2 (en) System and method for detecting a gaze of a viewer
KR20180052002A (en) Method for Processing Image and the Electronic Device supporting the same
CN110705365A (en) Human body key point detection method and device, electronic equipment and storage medium
CN109565548B (en) Method of controlling multi-view image and electronic device supporting the same
CN105380591A (en) Vision detecting device, system and method
US20220358662A1 (en) Image generation method and device
CN110427849B (en) Face pose determination method and device, storage medium and electronic equipment
JP2017162103A (en) Inspection work support system, inspection work support method, and inspection work support program
JP2015152938A (en) information processing apparatus, information processing method, and program
KR102457247B1 (en) Electronic device for processing image and method for controlling thereof
CN113509136A (en) Detection method, vision detection method, device, electronic equipment and storage medium
US10447996B2 (en) Information processing device and position information acquisition method
CN110545386A (en) Method and apparatus for photographing image
CN110018733A (en) Determine that user triggers method, equipment and the memory devices being intended to
CN114732350A (en) Vision detection method and device, computer readable medium and electronic equipment
CN111091388B (en) Living body detection method and device, face payment method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination