CN113032022A - Equipment control method and related equipment - Google Patents

Equipment control method and related equipment

Info

Publication number
CN113032022A
CN113032022A (application CN202110223065.2A; granted as CN113032022B)
Authority
CN
China
Prior art keywords
determining
information
user
eye
unlocking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110223065.2A
Other languages
Chinese (zh)
Other versions
CN113032022B (en)
Inventor
时准
吴博琦
莫畏
金雅庆
韩璘
张恒煦
杨嘉明
Current Assignee
Jilin Jianzhu University
Original Assignee
Jilin Jianzhu University
Priority date
Filing date
Publication date
Application filed by Jilin Jianzhu University filed Critical Jilin Jianzhu University
Priority to CN202110223065.2A priority Critical patent/CN113032022B/en
Publication of CN113032022A publication Critical patent/CN113032022A/en
Application granted granted Critical
Publication of CN113032022B publication Critical patent/CN113032022B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/4401 Bootstrapping
    • G06F9/4418 Suspend and resume; Hibernate and awake
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction

Abstract

The application discloses a device control method and related devices. A virtual reality device detects its own wearing state, and when it detects that it is in the worn state, it acquires a first duration for which it was most recently in the unworn state. If the first duration is longer than a first preset duration, the photographing device is controlled to acquire first image information of the user's eyes, and iris features are determined from the first image information. When the iris features match pre-stored iris features, the safety index of the pre-stored iris features is determined. When the safety index is smaller than a first threshold but greater than or equal to a second threshold, the display screen is lit up and an unlocking interface containing at least two trigger points is displayed; device unlocking information is then acquired, and when it successfully matches preset unlocking information, the content that was displayed before the screen was turned off is restored. The method protects the user's viewing privacy while the virtual reality device is not being worn, improving the privacy and security of using the device.

Description

Equipment control method and related equipment
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to an apparatus control method and related apparatus.
Background
In the prior art, because the display screen of a virtual reality device is arranged inside the device's housing, other people cannot simultaneously see the content the user is viewing, which gives the device strong inherent security. However, when the user takes off the virtual reality device and another person puts it on, the content the user was viewing is exposed. That content leaks, the user's privacy is exposed, and the security and privacy of using the virtual reality device are compromised.
Disclosure of Invention
The embodiment of the application provides a device control method and related devices.
In a first aspect, an embodiment of the present application provides a device unlocking method, applied to a device unlocking apparatus of a virtual reality device, where the virtual reality device further includes a photographing device and a display screen, and the method includes:
when the virtual reality device is detected to be in the worn state, acquiring a first duration for which the virtual reality device was most recently in the unworn state;
if the first time length is longer than a first preset time length, controlling the photographing device to acquire first image information of eyes of a user;
determining the iris feature according to the first image information;
when the iris features are matched with pre-stored iris features, determining the safety index of the pre-stored iris features;
when the safety index is greater than or equal to a first threshold, lighting up the display screen and displaying the content that was shown before the screen was turned off;
when the safety index is smaller than the first threshold and greater than or equal to a second threshold, lighting up the display screen and displaying an unlocking interface, wherein the unlocking interface comprises at least two trigger points, and the second threshold is smaller than the first threshold;
controlling the photographing device to acquire a plurality of second image information of the eyes of the user;
determining equipment unlocking information according to the plurality of second image information;
and when the device unlocking information is successfully matched with preset unlocking information, displaying the content that was shown before the screen was turned off.
In a second aspect, an embodiment of the present application provides an apparatus unlocking device, where the apparatus unlocking device includes:
the detection unit is used for detecting the wearing state of the virtual reality equipment;
the acquiring unit is used for acquiring the first duration for which the virtual reality device was most recently in the unworn state when the virtual reality device is detected to be in the worn state;
the photographing unit is used for controlling the photographing device to acquire first image information of eyes of a user if the first time length is longer than a first preset time length;
a determining unit configured to determine the iris features from the first image information;
the determining unit is further used for determining the safety index of the pre-stored iris characteristics when the iris characteristics are matched with the pre-stored iris characteristics;
the display unit is used for, when the safety index is greater than or equal to a first threshold, lighting up the display screen and displaying the content that was shown before the screen was turned off;
the display unit is further configured to light the display screen and display an unlocking interface when the safety index is smaller than the first threshold and is greater than or equal to a second threshold, where the unlocking interface includes at least two trigger points, and the second threshold is smaller than the first threshold;
the photographing unit is further used for controlling the photographing device to acquire a plurality of second image information of the eyes of the user;
the determining unit is further used for determining equipment unlocking information according to the plurality of second image information;
the display unit is further configured to display the content that was shown before the screen was turned off when the device unlocking information is successfully matched with preset unlocking information.
In a third aspect, an embodiment of the present application provides a virtual reality device, including a processor, a memory, a transceiver, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods of the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the present application, when it is detected that the virtual reality device is in the worn state, a first duration for which the device was most recently in the unworn state is acquired. If the first duration is longer than a first preset duration, the photographing device is controlled to acquire first image information of the user's eyes, and the iris features are determined from the first image information. When the iris features match pre-stored iris features, the safety index of the pre-stored iris features is determined. When the safety index is greater than or equal to a first threshold, the display screen is lit up and the content that was shown before the screen was turned off is displayed. When the safety index is smaller than the first threshold and greater than or equal to a second threshold (the second threshold being smaller than the first), the display screen is lit up and an unlocking interface comprising at least two trigger points is displayed; the photographing device is then controlled to acquire a plurality of second image information of the user's eyes, device unlocking information is determined from the second image information, and when that information successfully matches preset unlocking information, the previously displayed content is restored. In this way, the user's viewing privacy is effectively protected while the virtual reality device is not being worn, improving the privacy and security of using the device.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a device control method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an eye image provided in an embodiment of the present application;
fig. 3 is a schematic diagram of an eye contour and a display device provided by an embodiment of the present application in a spatial coordinate system;
FIG. 4 is a schematic diagram of an unlock interface provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a virtual reality device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an apparatus unlocking device provided in an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiment of the application provides a virtual reality device, and the virtual reality device comprises a photographing device, a display screen and a device unlocking device.
Wherein the photographing direction of the photographing device points toward the eyes of the user wearing the virtual reality device.
The display screen is used for displaying the image information output by the virtual reality equipment.
The working principle of the virtual reality device of the embodiment of the application is as follows. The device unlocking apparatus detects the wearing state of the virtual reality device; when the device is detected to be in the worn state, it acquires the first duration for which the device was most recently in the unworn state. If the first duration is longer than a first preset duration, it controls the photographing device to acquire first image information of the user's eyes and determines the iris features from that information. When the iris features match pre-stored iris features, it determines the safety index of the pre-stored iris features. When the safety index is greater than or equal to a first threshold, it lights up the display screen and displays the content that was shown before the screen was turned off. When the safety index is smaller than the first threshold and greater than or equal to a second threshold (the second threshold being smaller than the first), it lights up the display screen and displays an unlocking interface comprising at least two trigger points, controls the photographing device to acquire a plurality of second image information of the user's eyes, determines device unlocking information from the second image information, and, when that information successfully matches preset unlocking information, restores the previously displayed content. In this way, the user's viewing privacy is effectively protected while the virtual reality device is not being worn, improving the privacy and security of using the device.
Referring to fig. 1, fig. 1 is a schematic flow chart of an apparatus control method provided in an embodiment of the present application, and the method is applied to the virtual reality apparatus and includes the following steps.
Step 10, when detecting that the virtual reality device is in the worn state, the device unlocking apparatus acquires a first duration for which the virtual reality device was most recently in the unworn state;
the virtual reality device comprises a wearing state and a non-wearing state, and particularly comprises a wearing touch sensor and is used for detecting the wearing state of the virtual reality device through the wearing sensor.
Optionally, the wearing sensor is an infrared sensor, the infrared sensor is arranged on the virtual reality device, when the user wears the virtual reality device, the infrared sensor is blocked by the user, so that it is determined that the virtual reality device is currently in a wearing state, and when the infrared sensor is not blocked, the virtual reality device is not in the wearing state.
Optionally, the wearing sensor is a contact sensor, when the user wears the virtual reality device, the skin of the user contacts with the contact sensor, so that the virtual reality device is judged to be in a wearing state, and after the user takes off the virtual reality device, the skin of the user is separated from the contact sensor, so that the virtual display device is in an unworn state.
In an embodiment, after a user wears the virtual reality device for 10 minutes and takes off the virtual reality device, the virtual reality device detects that the virtual reality device is switched from a wearing state to an unworn state through a wearing sensor, and starts to record a first duration of the virtual reality device in the unworn state, and after 5 minutes, when the virtual reality device detects that the user continues to use, the virtual reality device stops timing of the first duration and determines that the first duration is 5 minutes. In addition, when the unworn state is detected next time, zero clearing processing is carried out on the last time of 5 minutes of the first time length, so that errors in the next time of recording the unworn state are avoided.
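The timing behaviour described above can be sketched as a small state tracker (an illustrative sketch, not from the patent; the class and method names are invented for the example):

```python
class WearStateTimer:
    """Tracks the "first duration": how long the headset was last unworn."""

    def __init__(self):
        self.unworn_since = None   # timestamp at which the unworn state began
        self.first_duration = 0.0  # last completed unworn interval, in seconds

    def on_sensor(self, worn: bool, now: float) -> float:
        if not worn and self.unworn_since is None:
            # Worn -> unworn: start timing the first duration.
            self.unworn_since = now
        elif worn and self.unworn_since is not None:
            # Unworn -> worn: stop timing and record the first duration.
            self.first_duration = now - self.unworn_since
            self.unworn_since = None
        return self.first_duration

    def clear(self):
        # The "zero clearing" step: reset the recorded duration before the
        # next unworn interval so a stale value causes no recording errors.
        self.first_duration = 0.0
```

In the 10-minute/5-minute example above, `on_sensor(False, t)` fires when the headset is taken off, and `on_sensor(True, t + 300)` returns a first duration of 300 seconds.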
Step 20, if the first duration is longer than a first preset duration, the equipment unlocking device controls the photographing device to acquire first image information of the eyes of the user;
wherein the first preset time period may be 20 seconds, 30 seconds or other time periods.
The first image information is image information obtained by photographing the eye region of the user by the photographing device.
Wherein controlling the photographing device to acquire the first image information of the user includes:
the equipment unlocking device determines the brightness of the eye area of the user through the photographing device.
And the equipment unlocking device adjusts the f-number of the photographing device according to the light brightness.
And the equipment unlocking device photographs the eye region of the user according to the adjusted f-number to acquire first image information.
For example, to ensure that the photographing device can acquire a clear, high-contrast image, when acquiring the first image information the device first determines the brightness of the photographing environment. That brightness is determined from the image currently shown on the display screen: when the displayed image is brighter, the environment is brighter, and when it is darker, the environment is darker. After the environment brightness is determined, the f-number of the photographing device is adjusted accordingly, and the photographing device is controlled to photograph the user's eye region to acquire the first image information.
In this way, by controlling the amount of light entering the photographing device, the first image information is acquired with roughly consistent image brightness, avoiding the increased processing difficulty that would otherwise result from inconsistent image brightness across first image information captured at different times.
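The brightness-to-aperture adjustment might look like the following (an illustrative sketch; the patent gives no concrete f-number table, so the breakpoints below are assumptions, and `camera` stands in for a real device driver):

```python
def choose_f_number(brightness: float) -> float:
    """Map the estimated eye-region brightness (0.0 dark .. 1.0 bright)
    to an f-number: a brighter display means a smaller aperture (larger
    f-number), so captured frames keep roughly the same image brightness."""
    if brightness < 0.25:
        return 1.8   # wide open in dim conditions
    elif brightness < 0.5:
        return 2.8
    elif brightness < 0.75:
        return 4.0
    return 5.6       # stopped down when the display is bright

def capture_eye_image(camera, display_brightness: float):
    """Adjust the aperture from the display's brightness, then photograph."""
    camera.set_f_number(choose_f_number(display_brightness))
    return camera.capture()
```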
Optionally, if the first duration is less than a second preset duration, the virtual reality device is controlled to continue recording the first duration, where the second preset duration is less than the first preset duration. Specifically, when a user only wants to glance briefly at the display content, the user usually just lifts the virtual reality device to look. To avoid misjudgment in this case, when the first duration is smaller than the second preset duration, the detected change of wearing state is treated as a misjudgment and the first duration continues to be recorded.
Step 30, the equipment unlocking device determines the iris characteristics according to the first image information;
the iris image refers to an image of an iris in an eye of a user, and specifically, the human eye is composed of a sclera, an iris, a pupil, a crystalline lens, a retina and the like. The iris is an annular part located between the black pupil and the white sclera, and contains the detail characteristics of a plurality of spots, thin lines, crowns, stripes, crypts and the like which are staggered with each other, and because the iris of the human eye has uniqueness, the identity of a user can be identified through the iris of the human eye.
The virtual reality device determines the iris features from the iris contour. In one implementation of the application, the iris features include the interlaced detail features such as spots, filaments, crowns, stripes, and crypts. When determining the feature information within the iris contour, the image within the iris contour of the first image information can be matched against pre-stored features; a successful match indicates that the image of that region is the same as the pre-stored feature.
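The patent does not specify how the match against pre-stored features is computed. A common technique in iris recognition, shown here only as an illustrative stand-in, is the normalized Hamming distance between binary iris codes:

```python
def hamming_distance(code_a: list, code_b: list) -> float:
    """Fraction of positions at which two equal-length iris codes differ."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must have equal length")
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

def iris_matches(code: list, stored: list, threshold: float = 0.32) -> bool:
    """Declare a match when the distance falls below a decision threshold.
    0.32 is a commonly cited value in the iris-recognition literature;
    the patent itself does not specify a matching rule or threshold."""
    return hamming_distance(code, stored) < threshold
```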
Step 40, when the iris features are matched with the pre-stored iris features, the equipment unlocking device determines the safety index of the pre-stored iris features;
wherein the safety index is 10, 20, or another value.
Wherein determining a safety index for a pre-stored iris feature when the iris feature matches the pre-stored iris feature comprises:
and when the iris features are matched with the pre-stored iris features, the equipment unlocking device determines the identity information corresponding to the pre-stored iris features.
And the equipment unlocking device determines the safety index according to the identity information.
Wherein the identity information identifies user A, user B, or another user.
In one implementation of the application, when the virtual reality device detects that the iris features match the pre-stored iris features, it acquires the identity information corresponding to the pre-stored iris features and checks whether corresponding identity information has been saved. When the virtual reality device contains the identity information, it determines the safety index corresponding to that identity information. Specifically, while a user is wearing the virtual reality device, the device can collect the user's iris features and record the user's wearing time, which helps distinguish between the people using the device. The user can set different safety indexes for different users through the virtual reality device, thereby adjusting the information content that different people are allowed to view.
For example, the virtual reality device stores identity information for a plurality of users; each identity corresponds to a pre-stored iris feature and a safety index of that user, and the correspondence between identity information and safety index is shown in Table 1.
TABLE 1
[Table 1 is provided as an image and is not reproduced here; it lists the correspondence between each user's identity information and safety index.]
For example, when the virtual reality device matches the iris features against the pre-stored iris features and determines that the user's identity is user C, the corresponding safety index is 14; when the user's identity is user B, the corresponding safety index is 20.
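The identity-to-safety-index lookup can be sketched as a simple table query (the dictionary below is a placeholder for Table 1; only the two values stated in the example are taken from the text):

```python
# Stand-in for Table 1: identity information -> safety index.
SAFETY_INDEX = {
    "user B": 20,  # stated in the worked example
    "user C": 14,  # stated in the worked example
}

def safety_index_for(identity: str):
    """Return the stored safety index, or None for an unknown identity."""
    return SAFETY_INDEX.get(identity)
```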
Step 50, when the safety index is greater than or equal to a first threshold, the device unlocking apparatus lights up the display screen and displays the content that was shown before the screen was turned off;
wherein the first threshold is 20, 30 or other values.
In an embodiment, the first threshold is 27. When the virtual reality device determines that the safety index corresponding to the pre-stored iris features is 32, the safety index is greater than the first threshold, so the virtual reality device lights up the display screen and displays the content that was shown before the screen was turned off.
Step 60, when the safety index is smaller than the first threshold and is greater than or equal to a second threshold, the device unlocking apparatus lights up the display screen and displays an unlocking interface, the unlocking interface includes at least two trigger points, and the second threshold is smaller than the first threshold;
wherein the second threshold is 5, 10, or other value.
In an embodiment, the first threshold is 27, the second threshold is 13, and when the virtual reality device determines that the safety index is 20, since the safety index is greater than the second threshold and smaller than the first threshold, after the virtual reality device lights up the display screen, an unlocking interface is displayed, where the unlocking interface includes at least two trigger points.
In another embodiment, when the virtual reality device determines that the safety index is 10, since the safety index is smaller than the second threshold, the virtual reality device lights up the display screen and then displays a prompt to the current wearer that the display content cannot be viewed.
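Steps 50 and 60, together with the denied case just described, form a three-way decision. A minimal sketch (the threshold values 27 and 13 come from the worked examples; in practice they would be configurable):

```python
def unlock_action(safety_index: int,
                  first_threshold: int = 27,
                  second_threshold: int = 13) -> str:
    """Decide what to display after a successful iris match."""
    if safety_index >= first_threshold:
        # Trusted user: restore the content shown before screen-off.
        return "restore_display"
    if safety_index >= second_threshold:
        # Partially trusted: require the gaze-based unlock interface.
        return "show_unlock_interface"
    # Untrusted: light the screen but refuse to show the content.
    return "deny_access"
```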
Step 70, the equipment unlocking device controls the photographing device to acquire a plurality of second image information of the eyes of the user;
wherein, control the device of shooing obtains a plurality of second image information of user's eyes, include:
the equipment unlocking device determines the brightness of the eye area of the user through the photographing device.
And the equipment unlocking device adjusts the f-number of the photographing device according to the light brightness.
And the equipment unlocking device photographs the eye region of the user according to the adjusted f-number to acquire second image information.
For example, to ensure that the photographing device can acquire a clear, high-contrast image, when acquiring the second image information the device first determines the brightness of the photographing environment, which is determined from the image currently shown on the display screen: when the displayed image is brighter, the environment is brighter, and when it is darker, the environment is darker. After the environment brightness is determined, the f-number of the photographing device is adjusted accordingly, and the photographing device is controlled to photograph the user's eye region to acquire the second image information.
In this way, by controlling the amount of light entering the photographing device, the second image information is acquired with roughly consistent image brightness, avoiding the increased processing difficulty that would otherwise result from inconsistent image brightness across second image information captured at different times.
And step 80, determining equipment unlocking information according to the plurality of second image information.
Wherein the determining device unlocking information according to the plurality of second image information comprises:
and the equipment unlocking device determines the observation area of the user on the display screen according to the iris characteristics.
And the equipment unlocking device determines the trigger point corresponding to the eye image according to the observation region.
And the equipment unlocking device determines the eye action information of the user according to the eye image and the corresponding photographing time.
And the equipment unlocking device determines the track unlocking information according to the trigger point and the corresponding photographing time.
And the equipment unlocking device determines equipment unlocking information according to the track unlocking information and the part action information.
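How the trigger-point track and the eye actions combine into one credential is not spelled out in the text; one plausible reading, shown here only as a sketch, is that both sequences must match the preset exactly:

```python
def device_unlock_info(trigger_points: list, eye_actions: list) -> dict:
    """Bundle the gazed trigger-point sequence (in photographing order)
    with the detected eye actions (e.g. blinks between frames)."""
    return {"track": tuple(trigger_points),
            "actions": tuple(eye_actions)}

def unlock_succeeds(info: dict, preset: dict) -> bool:
    """Unlock only when both the track and the eye actions match."""
    return info == preset
```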
In an implementation manner of the present application, determining the user's observation region on the display screen according to the eye image includes:
the device unlocking apparatus determines the user's eye contour and iris contour according to the eye image;
the device unlocking apparatus determines the user's observation direction according to the eye contour and the iris contour;
the device unlocking apparatus determines the user's observation region on the display screen according to the observation direction.
Wherein the determining the eye contour and the iris contour of the user according to the eye image comprises:
the virtual reality equipment carries out gray level processing on the first image information;
the virtual reality equipment determines an eye contour according to the first image information and a first preset gray value;
and the virtual reality equipment determines the iris outline according to the eye outline and a second preset gray value, wherein the second preset gray value is smaller than the first preset gray value.
Wherein the performing the gray scale processing on the first image information comprises:
And the virtual reality device determines color information of the pixel points of the first image information, wherein the color information comprises red luminance, green luminance and blue luminance.
The virtual reality device determines the gray value of the pixel point according to a first formula and the color information, wherein the first formula is Gray = R × a1 + G × a2 + B × a3, where Gray represents the gray value of the pixel point, R represents the red luminance of the pixel point, G represents the green luminance of the pixel point, B represents the blue luminance of the pixel point, a1 represents a first reference coefficient, a2 represents a second reference coefficient, a3 represents a third reference coefficient, and a1 + a2 + a3 = 100%. In one embodiment, a1 is 30%, a2 is 40%, and a3 is 30%.
For example, when a pixel of the first image information is a color pixel with a red luminance of 210, a green luminance of 50 and a blue luminance of 100, and a1 is 30%, a2 is 40% and a3 is 30%, the gray value of the pixel is Gray = 210 × 30% + 50 × 40% + 100 × 30% = 113.
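The first formula and the worked example can be expressed as a short sketch; the function name is an assumption, and the default coefficients follow the embodiment above:

```python
def gray_value(r, g, b, a1=0.30, a2=0.40, a3=0.30):
    """First formula: Gray = R*a1 + G*a2 + B*a3, with a1 + a2 + a3 = 100%."""
    assert abs(a1 + a2 + a3 - 1.0) < 1e-9, "reference coefficients must sum to 100%"
    return r * a1 + g * a2 + b * a3

# Worked example from the embodiment: gray_value(210, 50, 100) -> 113.0
```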
Optionally, determining an eye contour according to the first image information and a first preset gray value; and determining the iris outline according to the eye outline and a second preset gray value, wherein the second preset gray value is smaller than the first preset gray value.
Wherein the first preset grey scale value is 30, 50 or other values.
Wherein the second preset gray value is 10, 20 or other values.
In an embodiment, because the eyeball of the human eye includes a sclera and an iris, the color of the sclera differs from the color of the iris, and the eyeball lies within the eye contour of the human eye. After the gray processing is performed on the first image information, the pixel points whose gray values are smaller than the first preset gray value are determined, and the eye contour is determined according to these pixel points.
Within the eye contour, because the iris lies inside the eye contour and the gray values of the pixel points corresponding to the iris are smaller than those of the other pixel points of the eye region, the pixel points within the eye contour whose gray values are smaller than the second preset gray value can be determined according to the eye contour and the second preset gray value, and the iris contour of the human eye is determined from these pixel points.
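The two-threshold segmentation described above can be sketched as follows; the grid representation and the threshold values (30 and 10, matching the example values given earlier) are illustrative:

```python
def pixels_below(gray_image, threshold):
    """Return (x, y) coordinates of pixels whose gray value is below the
    threshold; gray_image is a list of rows of gray values."""
    return [(x, y)
            for y, row in enumerate(gray_image)
            for x, v in enumerate(row)
            if v < threshold]

def eye_and_iris_pixels(gray_image, t_eye=30, t_iris=10):
    """First (larger) threshold approximates the eye region; the second
    (smaller) threshold selects the darker iris pixels inside it."""
    eye = pixels_below(gray_image, t_eye)
    iris = [(x, y) for (x, y) in eye if gray_image[y][x] < t_iris]
    return eye, iris
```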
In an implementation manner of the present application, the step of determining the observation direction of the user according to the eye contour and the iris contour includes:
determining a first center position of the eye contour and a second center position of the iris contour;
establishing a space coordinate system comprising an x axis, a y axis and a z axis by taking the first central position as an origin, wherein the second central position is positioned on a plane formed by the x axis and the y axis;
and determining the observation direction according to the first preset position and the second central position.
Wherein the viewing direction refers to a direction of eye gaze of a user when viewing the display screen.
Wherein the determining a first center position of the eye contour and a second center position of the iris contour comprises:
determining a first pixel point and a second pixel point of the eye contour in a first direction and a third pixel point and a fourth pixel point along a second direction, wherein the first pixel point and the second pixel point are two side end points of the edge of the eye contour along the first direction, and the third pixel point and the fourth pixel point are two side end points of the edge of the eye contour along the second direction;
and determining the intersection point of the connecting line of the first pixel point and the second pixel point and the connecting line of the third pixel point and the fourth pixel point as the first central position of the eye contour.
Referring to fig. 2, in a specific embodiment, the first pixel point is A, the second pixel point is B, the third pixel point is C, and the fourth pixel point is D. Connecting A and B gives the line segment AB, and connecting C and D gives the line segment CD; the intersection point of line segment AB and line segment CD is the first central position of the eye contour.
The determination of the second center position of the iris outline is the same as the determination of the first center position of the eye outline, and is not described herein again.
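A sketch of the centre-finding step: the first central position is the intersection of line AB (joining the contour endpoints along the first direction) with line CD (joining the endpoints along the second direction). The 2D line-intersection helper below is illustrative:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4,
    in 2D image coordinates. Raises if the lines are parallel."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        raise ValueError("lines are parallel")
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
```

The same helper applies to the iris contour, whose second central position is determined in the same way.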
After the first central position and the second central position are determined, referring to fig. 3, a space coordinate system comprising an x axis, a y axis and a z axis is established with the first central position as the origin, wherein the second central position is located on the plane formed by the x axis and the y axis. The observation direction is then determined according to the second formula, the first preset position and the second central position, wherein the second formula is

A = arctan( sqrt(x1^2 + z1^2) / |y2| )

wherein A is the included angle between the observation direction and the y axis, x1 is the coordinate of the second central position along the x axis, z1 is the coordinate of the second central position along the z axis, and y2 is the coordinate of the first preset position along the y axis.

In one embodiment, the origin of the space coordinate system is the first central position with coordinates (0, 0, 0), the coordinates of the second central position are (2, 0, 1), and the coordinates of the first preset position are (0, -5, 0). The observation direction can therefore be determined according to the second formula, the coordinates of the first preset position and the coordinates of the second central position: the included angle between the observation direction and the y axis is A = arctan( sqrt(2^2 + 1^2) / |-5| ) = arctan( sqrt(5) / 5 ) ≈ 24°.
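The second formula is not reproduced legibly in this text; assuming it is A = arctan(sqrt(x1^2 + z1^2) / |y2|) (a reconstruction consistent with the worked coordinates and the ~24° result used later), the embodiment values can be checked as follows:

```python
import math

def viewing_angle(second_center, first_preset):
    """Angle A between the viewing direction and the y axis, reconstructed as
    A = arctan(sqrt(x1^2 + z1^2) / |y2|), in degrees."""
    x1, _, z1 = second_center
    _, y2, _ = first_preset
    return math.degrees(math.atan(math.hypot(x1, z1) / abs(y2)))

# Embodiment values: second centre (2, 0, 1) and first preset position
# (0, -5, 0) give A ≈ 24.1 degrees, matching the ~24° used later in the text.
```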
In an implementation manner of the present application, the determining a viewing area of a user on the display screen according to the viewing direction includes:
the equipment unlocking device determines a first distance between the display screen and human eyes of a user;
and the equipment unlocking device determines a corresponding observation area of the user on the display screen according to the observation direction and the first distance.
Wherein the first distance is 10mm, 20mm or other value.
In an embodiment, the first distance may be a preset parameter of the virtual reality device.
In another embodiment, the virtual reality device comprises a distance measuring sensor, and the first distance is a distance between the display screen and the human eyes of the user measured by the distance measuring sensor.
After the first distance is determined, the virtual reality device can determine the position of the display screen in the space coordinate system, and then determine the corresponding observation area of the user on the display screen according to the observation direction and the position of the display screen in the space coordinate system.
In an embodiment, the first distance is 20mm, so the display screen is 20 units from the origin along the y-axis direction of the space coordinate system. The observation direction forms an included angle of about 24° with the y axis, and the straight line along the observation direction passes through the second central position; the intersection point of the observation direction and the display screen is therefore the observation area, and the coordinate position of the observation area is (10, 20, 5).
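The viewing-area computation can be checked with a small ray-screen intersection sketch. It assumes, per the worked coordinates, that the viewing ray starts at the first preset position and passes through the second central position, and that the screen is the plane y = screen_y:

```python
def viewing_area(first_preset, second_center, screen_y):
    """Intersect the viewing ray (from the first preset position through the
    second central position) with the screen plane y = screen_y."""
    x0, y0, z0 = first_preset
    x1, y1, z1 = second_center
    t = (screen_y - y0) / (y1 - y0)  # ray parameter at the screen plane
    return (x0 + t * (x1 - x0), screen_y, z0 + t * (z1 - z0))

# With first preset (0, -5, 0), second centre (2, 0, 1) and screen_y = 20,
# the observation area is (10, 20, 5), matching the embodiment.
```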
In an implementation manner of the present application, the determining, by the device unlocking apparatus, a target trigger point corresponding to the eye image according to the observation region includes:
determining a trigger point with a distance from the observation area smaller than a preset distance;
and determining the trigger point with the minimum distance from the observation area as a target trigger point.
In an embodiment, the display interface on the display screen includes a plurality of trigger points, and when the coordinate position of the viewing area is determined to be (10, 20, 5), the coordinates of the plurality of trigger points are (11, 20, 6), (-5, 20, 3), (-6, 20, 8) and (0, 20, 10), respectively, then the distance between the viewing area and the trigger point whose coordinate is (11, 20, 6) is the minimum, so that the trigger point whose coordinate is (11, 20, 6) is the target trigger point.
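The target-trigger-point selection (the nearest trigger point within a preset distance of the observation area) can be sketched as follows; the default preset distance is an assumption:

```python
def target_trigger_point(view_point, trigger_points, max_dist=10.0):
    """Among trigger points closer than max_dist to the observation area,
    return the nearest one; return None if no trigger point is close enough."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(view_point, p)) ** 0.5
    candidates = [p for p in trigger_points if dist(p) < max_dist]
    return min(candidates, key=dist) if candidates else None
```

With the embodiment's observation area (10, 20, 5) and trigger points (11, 20, 6), (-5, 20, 3), (-6, 20, 8) and (0, 20, 10), the nearest point (11, 20, 6) is selected.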
In an implementation manner of the present application, the determining the eye movement information of the user according to the eye image and the corresponding photographing time includes:
determining the eye contour of the user's eyes according to the eye image;
determining the opening and closing size of the eyes of the user according to the eye contour;
and determining the eye action information according to the opening and closing size and the corresponding photographing time.
The manner in which the virtual reality device determines the eye contour of the user's eyes according to the eye image is the same as described above, and is not repeated here.
In an implementation manner of the present application, the virtual reality device determines the opening and closing size of the user's eyes from the eye contour. Specifically, since the opening and closing size of the user's eyes is given by the relative distance between the upper eyelid and the lower eyelid as the eyes open and close, the opening and closing size can be determined from the size of the eye contour along the z-axis direction.
The virtual reality device determines the eye action information according to the opening and closing sizes and the corresponding photographing times. Specifically, after determining a plurality of opening and closing sizes and photographing times, the virtual reality device determines the action information of the user's eyes. For example, when the photographing time is 1.0 seconds the opening and closing size of the user's eyes is 2 mm, when the photographing time is 1.1 seconds it is 1.2 mm, and when the photographing time is 1.4 seconds it is 0.2 mm; from these three groups of data the eye action information is judged to be an eye closing action. When it is detected that the opening and closing size of the user's eyes gradually increases as the photographing time increases, the eye action information is judged to be an eye opening action; and when the opening and closing size is detected to increase and decrease periodically, the eye action information is judged to be a blinking action.
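The classification rules above can be sketched as follows; the label strings and the function name are illustrative:

```python
def classify_eye_action(samples):
    """samples: list of (photographing_time_s, opening_mm) sorted by time.
    Monotonically decreasing opening -> eye closing; monotonically
    increasing -> eye opening; both rises and falls -> blinking."""
    sizes = [size for _, size in samples]
    diffs = [b - a for a, b in zip(sizes, sizes[1:])]
    if diffs and all(d < 0 for d in diffs):
        return "close"
    if diffs and all(d > 0 for d in diffs):
        return "open"
    if any(d < 0 for d in diffs) and any(d > 0 for d in diffs):
        return "blink"
    return "none"
```

The three samples from the example (2 mm at 1.0 s, 1.2 mm at 1.1 s, 0.2 mm at 1.4 s) classify as an eye closing action.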
In an implementation manner of the present application, the virtual reality device determines the trajectory unlocking information according to the target trigger point and the corresponding photographing time, specifically, the virtual reality device obtains a plurality of target trigger points and corresponding photographing time, and determines the change information of the target trigger point according to the photographing time.
In an embodiment, as shown in fig. 4, the unlocking interface includes five trigger points A, B, C, D and E. The virtual reality device obtains 3 target trigger points: the first target trigger point is trigger point A with a photographing time of 1.4 seconds, the second is trigger point C with a photographing time of 1.7 seconds, and the third is trigger point B with a photographing time of 2.1 seconds. It is then determined from the three target trigger points and the corresponding photographing times that the trajectory unlocking information passes through trigger point A, trigger point C and trigger point B in sequence.
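Recovering the trajectory simply orders the target trigger points by photographing time, e.g.:

```python
def trajectory(hits):
    """hits: list of (trigger_point_label, photographing_time_s) pairs.
    Returns the trigger points in the order they were observed."""
    return [label for label, _ in sorted(hits, key=lambda h: h[1])]
```

For the embodiment's hits A at 1.4 s, C at 1.7 s and B at 2.1 s, this yields the sequence A, C, B.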
In an implementation manner of the application, the device unlocking information is determined according to the trajectory unlocking information and the eye action information. Specifically, the unlocking mode of the virtual reality device comprises trajectory unlocking and action unlocking. After detecting the eye action information and the trajectory unlocking information of the user, the virtual reality device first determines the target trigger points according to the trajectory unlocking information, and then determines the eye action at each target trigger point according to the eye action information, so that the virtual reality device can perform the unlocking operation.
In one embodiment, the unlocking interface of the virtual reality device comprises trigger points A, B, C, D and E, and the preset unlocking scheme is: perform two blinking actions at trigger point A, move to trigger point B and perform one blinking action, then move to trigger point E and open the eyes. The virtual reality device detects that the device unlocking information of the user is: two blinking actions at trigger point A, then moving to trigger point B and performing two blinking actions, then moving to trigger point C and opening the eyes. This device unlocking information does not match the unlocking information preset by the user, so the unlocking of the virtual reality device fails, and the virtual reality device outputs prompt information of the unlocking failure to the user through the display screen.
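The final match against the preset unlocking scheme can be sketched as an exact comparison of (trigger point, eye action) sequences; the pair encoding below is an assumption:

```python
def match_unlock(observed, preset):
    """observed/preset: ordered lists of (trigger_point, eye_action) pairs.
    Unlocking succeeds only when both the trajectory and the eye action at
    each target trigger point match the preset scheme exactly."""
    return observed == preset

# Hypothetical encoding of the embodiment above (pair format is an assumption)
preset = [("A", "blink x2"), ("B", "blink x1"), ("E", "open")]
observed = [("A", "blink x2"), ("B", "blink x2"), ("C", "open")]
```

Here match_unlock(observed, preset) is False, so the device would display an unlock-failure prompt.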
And step 90, when the device unlocking information is successfully matched with preset unlocking information, displaying the content that was displayed before the display screen was turned off.
In the above embodiment, when the device unlocking information is successfully matched with the preset unlocking information of the virtual reality device, the virtual reality device directly displays to the current user the content that was displayed before the screen was turned off, so that the user can conveniently observe and operate through the virtual reality device.
In an implementation manner of the application, when it is detected that the virtual reality device is in a wearing state, a first duration for which the virtual reality device was last in an unworn state is acquired; when the first duration is longer than a first preset duration, first image information of the user is acquired; the iris image is determined according to the first image information; when the iris image matches a pre-stored image, a safety index of the pre-stored image is determined; when the safety index is greater than a first threshold, the display screen is lit up and the content displayed before the screen was turned off is displayed; when the safety index is smaller than the first threshold and greater than a second threshold, the display screen is lit up and an unlocking interface is displayed, wherein the unlocking interface comprises at least two trigger points; a plurality of second image information is acquired; device unlocking information is determined according to the plurality of second image information; and when the unlocking information is successfully matched with preset unlocking information, the content displayed before the screen was turned off is displayed. In this way, the viewing privacy of the user can be effectively protected when the user is not wearing the virtual reality device, improving the privacy and safety of using the virtual reality device.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a virtual reality device according to an embodiment of the present disclosure. As shown in the drawing, the virtual reality device includes a processor, a memory, a transceiver port, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the following steps:
when the virtual reality device is detected to be in a wearing state, acquiring a first duration for which the virtual reality device was last in an unworn state;
if the first time length is longer than a first preset time length, controlling the photographing device to acquire first image information of eyes of a user;
determining the iris feature according to the first image information;
when the iris features are matched with pre-stored iris features, determining the safety index of the pre-stored iris features;
when the safety index is larger than or equal to a first threshold value, lighting up the display screen and displaying the content that was displayed before the display screen was turned off;
when the safety index is smaller than the first threshold and larger than or equal to a second threshold, lighting up the display screen and displaying an unlocking interface, wherein the unlocking interface comprises at least two trigger points, and the second threshold is smaller than the first threshold;
controlling the photographing device to acquire a plurality of second image information of the eyes of the user;
determining device unlocking information according to the plurality of second image information;
and when the device unlocking information is successfully matched with preset unlocking information, displaying the content that was displayed before the display screen was turned off.
In an implementation manner of the present application, in determining device unlocking information from the plurality of second image information, the program includes instructions specifically configured to:
determining an observation area of a user on the display screen according to the eye image;
determining a target trigger point corresponding to the eye image according to the observation region;
determining the eye action information according to the eye image and the corresponding photographing time;
determining the trajectory unlocking information according to the target trigger point and the corresponding photographing time;
and determining the equipment unlocking information according to the track unlocking information and the eye action information.
In an implementation of the application, in determining the viewing area of the user on the display screen from the eye image, the program includes instructions specifically configured to:
determining an eye contour and an iris contour of the user according to the eye image;
determining the observation direction of the user according to the eye contour and the iris contour;
and determining the observation area of the user on the display screen according to the observation direction.
In an implementation of the application, the above program comprises instructions for performing the following steps in particular in determining a viewing direction of a user from the eye contour and the iris contour:
determining a first center position of the eye contour and a second center position of the iris contour;
establishing a space coordinate system comprising an x axis, a y axis and a z axis by taking the first central position as an origin, wherein the second central position is positioned on a plane formed by the x axis and the y axis;
determining the viewing direction from the first center position and the second center position.
In an implementation manner of the present application, in determining the eye movement information according to the eye image and the corresponding photographing time, the program includes instructions specifically configured to:
determining the opening and closing size of the eyes of the user according to the eye image;
and determining the eye action information according to the opening and closing size and the corresponding photographing time.
Referring to fig. 6, fig. 6 is a device unlocking apparatus according to an embodiment of the present application, where the apparatus includes:
a detecting unit 310, configured to detect a wearing state of the virtual reality device;
an obtaining unit 320, configured to obtain, when it is detected that the virtual reality device is in a wearing state, a first duration for which the virtual reality device was last in an unworn state;
the photographing unit 330 is configured to control the photographing device to obtain first image information of eyes of a user if the first duration is greater than a first preset duration;
a determining unit 340 for determining the iris image according to the first image information;
the determining unit 340 is further configured to determine a safety index of a pre-stored iris feature when the iris feature matches the pre-stored iris feature;
the display unit 350 is configured to, when the safety index is greater than or equal to a first threshold, light up the display screen and display the content that was displayed before the screen was turned off;
the display unit 350 is further configured to, when the safety index is smaller than the first threshold and is greater than or equal to a second threshold, light up the display screen and display an unlocking interface, where the unlocking interface includes at least two trigger points, and the second threshold is smaller than the first threshold;
the photographing unit 330 is further configured to control the photographing apparatus to acquire a plurality of second image information of the eyes of the user;
the determining unit 340 is further configured to determine device unlocking information according to the plurality of second image information;
the display unit 350 is further configured to, when the device unlocking information is successfully matched with preset unlocking information, display the content that was displayed before the screen was turned off.
In an implementation manner of the present application, in determining the device unlocking information according to the plurality of second image information, the determining unit 340 is specifically configured to:
determining an observation area of a user on the display screen according to the eye image;
determining a target trigger point corresponding to the eye image according to the observation region;
determining the eye action information according to the eye image and the corresponding photographing time;
determining the trajectory unlocking information according to the target trigger point and the corresponding photographing time;
and determining the equipment unlocking information according to the track unlocking information and the eye action information.
In an implementation manner of the present application, in determining an observation region of the user on the display screen according to the eye image, the determining unit 340 is specifically configured to:
determining an eye contour and an iris contour of the user according to the eye image;
determining the observation direction of the user according to the eye contour and the iris contour;
and determining the observation area of the user on the display screen according to the observation direction.
In an implementation manner of the present application, in determining the viewing direction of the user according to the eye contour and the iris contour, the determining unit 340 is specifically configured to:
determining a first center position of the eye contour and a second center position of the iris contour;
establishing a space coordinate system comprising an x axis, a y axis and a z axis by taking the first central position as an origin, wherein the second central position is positioned on a plane formed by the x axis and the y axis;
determining the viewing direction from the first center position and the second center position.
In an implementation manner of the present application, in determining the eye movement information according to the eye image and the corresponding photographing time, the determining unit 340 is specifically configured to:
determining the opening and closing size of the user eyes according to the eye image;
and determining the eye action information according to the opening and closing size and the corresponding photographing time.
It should be noted that the detecting unit 310, the acquiring unit 320, and the determining unit 340 may be implemented by a processor, the photographing unit 330 may be implemented by an image capturing device, and the display unit 350 may be implemented by a display screen.
The present application also provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps described for the virtual reality device in the above method embodiments.
Embodiments of the present application also provide a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described for the virtual reality device in the above method embodiments. The computer program product may be a software installation package.
The steps of a method or algorithm described in the embodiments of the present application may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions may be comprised of corresponding software modules that may be stored in Random Access Memory (RAM), flash Memory, Read Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc Read Only Memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in an access network device, a target network device, or a core network device. Of course, the processor and the storage medium may reside as discrete components in an access network device, a target network device, or a core network device.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functionality described in the embodiments of the present application may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Video Disk (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the embodiments of the present application in further detail, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present application, and are not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (10)

1. A device control method, characterized by being applied to a device unlocking apparatus of a virtual reality device, wherein the virtual reality device further comprises a photographing device and a display screen, and the device control method comprises the following steps:
when the virtual reality device is detected to be in a wearing state, acquiring a first duration for which the virtual reality device was last in an unworn state;
if the first time length is longer than a first preset time length, controlling the photographing device to acquire first image information of eyes of a user;
determining the iris feature according to the first image information;
when the iris features are matched with pre-stored iris features, determining the safety index of the pre-stored iris features;
when the safety index is larger than or equal to a first threshold value, lighting up the display screen and displaying the content that was displayed before the display screen was turned off;
when the safety index is smaller than the first threshold and larger than or equal to a second threshold, lighting up the display screen and displaying an unlocking interface, wherein the unlocking interface comprises at least two trigger points, and the second threshold is smaller than the first threshold;
controlling the photographing device to acquire a plurality of second image information of the eyes of the user;
determining device unlocking information according to the plurality of second image information;
and when the device unlocking information is successfully matched with preset unlocking information, displaying the content that was displayed before the display screen was turned off.
2. The device control method according to claim 1, wherein the unlocking information includes trajectory unlocking information and eye action information, the second image information includes a plurality of eye images and a photographing time corresponding to each eye image, and the determining the device unlocking information according to the plurality of second image information includes:
determining an observation area of a user on the display screen according to the eye image;
determining a target trigger point corresponding to the eye image according to the observation region;
determining the eye action information according to the eye image and the corresponding photographing time;
determining the track unlocking information according to the target trigger point and the corresponding photographing time;
and determining the equipment unlocking information according to the track unlocking information and the eye action information.
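The combination step at the end of claim 2 can be sketched as a time-ordered merge of the two channels (gaze hits on trigger points, and eye actions such as blinks). This is a hypothetical illustration; the data shapes and names are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of claim 2: merge the gaze trajectory over trigger
# points with eye actions into one unlocking sequence, ordered by time.

def build_unlock_info(gaze_hits, eye_actions):
    """gaze_hits: list of (trigger_point_id, time);
    eye_actions: list of (action_name, time).
    Returns the merged, time-ordered unlocking sequence."""
    events = [("gaze", point, t) for point, t in gaze_hits] + \
             [("eye", action, t) for action, t in eye_actions]
    events.sort(key=lambda e: e[2])            # order by photographing time
    return [(kind, value) for kind, value, _ in events]

preset = [("gaze", 1), ("eye", "blink"), ("gaze", 3)]
observed = build_unlock_info([(1, 0.0), (3, 2.0)], [("blink", 1.0)])
print(observed == preset)  # True
```

Matching the merged sequence against a stored preset sequence is then a simple equality (or tolerance-based) comparison.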
3. The device control method according to claim 2, wherein the determining the observation area of the user on the display screen according to the eye image comprises:
determining an eye contour and an iris contour of the user according to the eye image;
determining an observation direction of the user according to the eye contour and the iris contour;
and determining the observation area of the user on the display screen according to the observation direction.
4. The device control method according to claim 3, wherein the determining the observation direction of the user according to the eye contour and the iris contour comprises:
determining a first center position of the eye contour and a second center position of the iris contour;
establishing a spatial coordinate system comprising an x-axis, a y-axis and a z-axis with the first center position as the origin, wherein the second center position is located on the plane formed by the x-axis and the y-axis;
and determining the observation direction according to the first center position and the second center position.
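Under the coordinate frame of claim 4, one simple way to realize the last step is to treat the iris-center offset in the x-y plane as the lateral components of the gaze vector. The sketch below assumes a fixed depth constant for the z component, which is an illustrative assumption, not something the patent specifies.

```python
# Hypothetical sketch of claim 4: with the eye-contour center as the origin
# (first center position) and the iris center lying in the x-y plane (second
# center position), approximate the observation direction as a unit vector.
import math

def viewing_direction(iris_center_xy, eyeball_depth=1.0):
    """iris_center_xy: (x, y) offset of the iris center from the origin.
    eyeball_depth: assumed z-distance constant (illustrative).
    Returns a unit gaze vector (x, y, z)."""
    dx, dy = iris_center_xy
    norm = math.sqrt(dx * dx + dy * dy + eyeball_depth ** 2)
    return (dx / norm, dy / norm, eyeball_depth / norm)

print(viewing_direction((0.0, 0.0)))  # (0.0, 0.0, 1.0) -> looking straight ahead
```

Intersecting this ray with the display plane then yields the observation area of claim 3.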
5. The device control method according to claim 2, wherein the determining the eye movement information according to the eye image and the corresponding photographing time comprises:
determining the degree of opening of the user's eye according to the eye image;
and determining the eye movement information according to the degree of opening and the corresponding photographing time.
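Claim 5 amounts to classifying eye actions from a time series of per-frame opening degrees. A minimal sketch of one such action (a blink) follows; the threshold and the maximum blink duration are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of claim 5: detect blink-type eye actions from
# (opening_degree, photographing_time) samples. Thresholds are assumptions.

def detect_blinks(samples, closed_below=0.2, max_blink_seconds=0.5):
    """samples: list of (opening_degree, time), time-ordered.
    Returns a list of (close_time, reopen_time) blink intervals."""
    blinks, close_start = [], None
    for opening, t in samples:
        if opening < closed_below:
            if close_start is None:
                close_start = t            # eye just closed
        elif close_start is not None:
            if t - close_start <= max_blink_seconds:
                blinks.append((close_start, t))
            close_start = None             # eye reopened
    return blinks

frames = [(0.9, 0.0), (0.1, 0.1), (0.05, 0.2), (0.8, 0.3), (0.9, 0.4)]
print(detect_blinks(frames))  # [(0.1, 0.3)]
```

Closures longer than the maximum duration are discarded, distinguishing deliberate blinks from the eyes simply being shut.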
6. A device unlocking apparatus, characterized by comprising:
a detection unit, configured to detect a wearing state of a virtual reality device;
an acquiring unit, configured to acquire, when it is detected that the virtual reality device is in a worn state, a first duration for which the virtual reality device was most recently in an unworn state;
a photographing unit, configured to control a photographing device to acquire first image information of an eye of a user if the first duration is longer than a first preset duration;
a determining unit, configured to determine an iris feature according to the first image information;
the determining unit is further configured to determine a security index of a pre-stored iris feature when the iris feature matches the pre-stored iris feature;
a display unit, configured to light up a display screen and display the display content shown before the screen was turned off when the security index is greater than or equal to a first threshold;
the display unit is further configured to light up the display screen and display an unlocking interface when the security index is smaller than the first threshold and greater than or equal to a second threshold, wherein the unlocking interface comprises at least two trigger points, and the second threshold is smaller than the first threshold;
the photographing unit is further configured to control the photographing device to acquire a plurality of pieces of second image information of the eye of the user;
the determining unit is further configured to determine device unlocking information according to the plurality of pieces of second image information;
and the display unit is further configured to display the display content shown before the screen was turned off when the device unlocking information is successfully matched with preset unlocking information.
7. The device unlocking apparatus according to claim 6, wherein, in determining the device unlocking information according to the plurality of pieces of second image information, the determining unit is specifically configured to:
determine an observation area of the user on the display screen according to the eye image;
determine a target trigger point corresponding to the eye image according to the observation area;
determine eye movement information according to the eye image and the corresponding photographing time;
determine trajectory unlocking information according to the target trigger point and the corresponding photographing time;
and determine the device unlocking information according to the trajectory unlocking information and the eye movement information.
8. The device unlocking apparatus according to claim 7, wherein, in determining the observation area of the user on the display screen according to the eye image, the determining unit is specifically configured to:
determine an eye contour and an iris contour of the user according to the eye image;
determine an observation direction of the user according to the eye contour and the iris contour;
and determine the observation area of the user on the display screen according to the observation direction.
9. A virtual reality device, comprising a processor, a memory, a transceiver, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method according to any one of claims 1-5.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored thereon, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN202110223065.2A 2021-02-26 2021-02-26 Equipment control method and related equipment Active CN113032022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110223065.2A CN113032022B (en) 2021-02-26 2021-02-26 Equipment control method and related equipment

Publications (2)

Publication Number Publication Date
CN113032022A true CN113032022A (en) 2021-06-25
CN113032022B CN113032022B (en) 2022-11-11

Family

ID=76464881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110223065.2A Active CN113032022B (en) 2021-02-26 2021-02-26 Equipment control method and related equipment

Country Status (1)

Country Link
CN (1) CN113032022B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022054A (en) * 2016-05-26 2016-10-12 广东小天才科技有限公司 Unlocking method and device for intelligent wearable equipment
US20170115736A1 (en) * 2013-04-10 2017-04-27 Google Inc. Photo-Based Unlock Patterns
CN107169338A (en) * 2017-07-25 2017-09-15 上海闻泰电子科技有限公司 Unlocking method and device
CN108804006A (en) * 2018-05-24 2018-11-13 广东小天才科技有限公司 For the unlocking method of wearable device, device, equipment and storage medium
CN108958573A (en) * 2017-05-26 2018-12-07 阿里巴巴集团控股有限公司 Identity identifying method and device based on virtual reality scenario
CN109145566A (en) * 2018-09-08 2019-01-04 太若科技(北京)有限公司 Method and apparatus for unlocking AR glasses based on gaze point information, and AR glasses
CN109299678A (en) * 2018-09-08 2019-02-01 太若科技(北京)有限公司 Method for unlocking AR glasses by using an iris, unlocking apparatus, and AR glasses
CN109661668A (en) * 2016-06-24 2019-04-19 快图有限公司 Image processing method and system for iris recognition
TW201917625A (en) * 2017-10-17 2019-05-01 群邁通訊股份有限公司 Unlocking system, unlocking method and electronic device
US20190295429A1 (en) * 2013-02-14 2019-09-26 Steven M. McHugh System and Method for displaying personalized interactive and/or instructive data
CN111062018A (en) * 2019-11-19 2020-04-24 广州恒龙信息技术有限公司 Method for unlocking AR glasses based on sclera, unlocking device and AR glasses
CN211087230U (en) * 2019-08-28 2020-07-24 南京深视光点科技有限公司 Eyeball tracking unlocking system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周伟 (Zhou Wei): "Video face detection and target tracking based on deep learning", China Master's Theses Full-text Database, Information Science and Technology, 15 August 2019 (2019-08-15), pages 138-1032 *
李卓 (Li Zhuo) et al.: "Study on band-pass filter coating processes for vehicle-mounted active infrared night vision systems", Chinese Journal of Vacuum Science and Technology, 15 July 2019 (2019-07-15), pages 557-561 *

Also Published As

Publication number Publication date
CN113032022B (en) 2022-11-11

Similar Documents

Publication Publication Date Title
CN107272904B (en) Image display method and electronic equipment
CN109993115B (en) Image processing method and device and wearable device
CN104834901B (en) A kind of method for detecting human face, apparatus and system based on binocular stereo vision
US20140201844A1 (en) Detection of and privacy preserving response to observation of display screen
US9774830B2 (en) Imaging apparatus and imaging method
CN111124104B (en) Gaze tracking using mapping of pupil center locations
JPWO2016098406A1 (en) Information processing apparatus, information processing method, and program
CN104699250B (en) Display control method and device, electronic equipment
US20150317956A1 (en) Head mounted display utilizing compressed imagery in the visual periphery
US20170337740A1 (en) Contact lens virtual fitting method and device, and computer program for executing contact lens virtual fitting method
US20110170060A1 (en) Gaze Tracking Using Polarized Light
AU2012365030A1 (en) Device and method for controlling rotation of displayed image
EP3649577B1 (en) Application to determine reading/working distance
TW201437682A (en) Method for adjusting head mounted display adaptively and head-mounted display
CN110568930B (en) Method for calibrating fixation point and related equipment
CN106650661A (en) Terminal usage state detection method and apparatus
WO2019237838A1 (en) Parameter adjustment method and apparatus for wearable device, wearable device and storage medium
US11969210B2 (en) Methods and apparatus for making a determination about an eye using color temperature adjusted lighting
CN112597931A (en) Screen state detection method and device, electronic equipment, server and storage medium
CN111513670A (en) Estimation of corneal radius for use in eye tracking
JPWO2019021601A1 (en) Information processing apparatus, information processing method, and program
US20210378509A1 (en) Pupil assessment using modulated on-axis illumination
KR20200144196A (en) Electronic device and method for providing function using corneal image thereof
CN106708249B (en) Interaction method, interaction device and user equipment
CN114239093A (en) Display control method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant