CN115562497B - Augmented reality information interaction method, augmented reality device, and storage medium - Google Patents


Info

Publication number
CN115562497B
Authority
CN
China
Prior art keywords
information
interaction
layer
interaction information
augmented reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211380357.8A
Other languages
Chinese (zh)
Other versions
CN115562497A (en)
Inventor
黄海峰 (Huang Haifeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shunwei Technology Co ltd
Original Assignee
Zhejiang Shunwei Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shunwei Technology Co., Ltd.
Priority to CN202211380357.8A
Publication of CN115562497A
Application granted
Publication of CN115562497B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on GUIs, based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817: Interaction techniques using icons, based on specific properties of the displayed interaction object or a metaphor-based environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an augmented reality information interaction method, an augmented reality device and a storage medium. The method comprises the following steps: determining gaze focus information of a user on an augmented reality object, wherein the gaze focus information comprises a focus position and a gaze distance, the augmented reality object comprises a real object and virtual information superimposed on the real object, and the virtual information comprises multiple layers of interaction information corresponding to different depths of field; and controlling a display screen to switch and display the multi-layer interaction information according to the focus position and the gaze distance. The apparatus comprises: a display screen that emits an interaction beam carrying interaction information; a mirror that transmits visible light and reflects the interaction beam to the human eye; an eye movement detector that detects the focus of the user's line of sight; at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the augmented reality information interaction method.

Description

Augmented reality information interaction method, augmented reality device, and storage medium
Technical Field
The application relates to the technical field of augmented reality, in particular to an augmented reality information interaction method, an augmented reality device and a storage medium.
Background
Augmented reality (AR) technology superimposes virtual information on the real world in the same view, so that an immersed user can interact with virtual and real scenes in a natural manner, in three dimensions and in real time. AR technology achieves a seamless connection between virtual information and the real world, provides a channel linking the virtual world to the real world, and greatly reduces the time and effort a user spends switching between and matching up virtual and real content.
Because the virtual information corresponding to a real object is complex and diverse, it cannot all be presented on a single interaction information layer; even if it could be, the result would be so dense and cluttered as to easily cause eye fatigue. In the related art, virtual information is therefore typically classified and presented on separate interaction information layers, and switching between the layers is triggered by detecting the user's blink frequency or gaze dwell time. However, blink frequency is difficult for a user to control, is highly prone to misoperation, and frequent blinking itself causes eye strain. Likewise, the dwell-time approach is prone to misoperation and responds slowly, resulting in a poor user experience.
Disclosure of Invention
The augmented reality information interaction method, augmented reality device and storage medium provided by the embodiments of the application can solve, at least in part, the above-mentioned and other defects in the prior art.
The augmented reality information interaction method provided according to the first aspect of the application comprises the following steps:
determining gaze focus information of a user on an augmented reality object, wherein the gaze focus information comprises a focus position and a gaze distance, the augmented reality object comprises a real object and virtual information superimposed on the real object, and the virtual information comprises multiple layers of interaction information corresponding to different depths of field; and
controlling a display screen to switch and display the multi-layer interaction information according to the focus position and the gaze distance.
According to one embodiment of the present application, determining gaze focus information of a user on an augmented reality object includes:
determining the gaze focus information of the user on the real object; or
determining the gaze focus information of the user on the virtual information.
According to one embodiment of the application, the multi-layer interaction information comprises previous-layer interaction information and next-layer interaction information;
and controlling the display screen to switch and display the multi-layer interaction information according to the focus position and the gaze distance comprises:
in response to determining that the focus position is in a preset interaction area and the gaze distance is the preset depth of field corresponding to the previous-layer interaction information, controlling the display screen to display the previous-layer interaction information and blur the next-layer interaction information; and
in response to determining that the focus position is in the preset interaction area and the gaze distance is the preset depth of field corresponding to the next-layer interaction information, controlling the display screen to display the next-layer interaction information and blur the previous-layer interaction information;
the preset interaction area is a specified area on the real object or a specified area on the virtual information.
According to one embodiment of the present application, the display area of the previous-layer interaction information partially overlaps the display area of the next-layer interaction information, and the preset interaction area is the portion where the two display areas overlap.
According to one embodiment of the present application, the next-layer interaction information includes at least one piece of sub-information of the previous-layer interaction information.
According to one embodiment of the present application, the next-layer interaction information comprises a plurality of pieces of the sub-information;
wherein controlling the display screen to display the next-layer interaction information and blur the previous-layer interaction information comprises:
controlling the display screen to sequentially display the plurality of pieces of sub-information in a predetermined order while blurring the previous-layer interaction information.
According to an embodiment of the present application, controlling the display screen to switch and display the multi-layer interaction information according to the focus position and the gaze distance further comprises:
in response to determining that the focus position deviates from the preset interaction area and the gaze distance is the preset depth of field corresponding to the previous-layer interaction information, controlling the display screen to keep displaying the previous-layer interaction information and close the next-layer interaction information; and
in response to determining that the focus position deviates from the preset interaction area and the gaze distance is the preset depth of field corresponding to the next-layer interaction information, controlling the display screen to keep displaying the next-layer interaction information and close the previous-layer interaction information.
According to one embodiment of the present application,
the virtual information further comprises a main icon and a plurality of sub-icons;
and controlling the display screen to switch and display the multi-layer interaction information according to the focus position and the gaze distance comprises:
in response to determining that the focus position is on the main icon and the gaze distance is the preset depth of field corresponding to the previous-layer interaction information, controlling the display screen to display the previous-layer interaction information and blur the plurality of sub-icons and the next-layer interaction information; and
in response to determining that the focus position is on a target sub-icon and the gaze distance is the preset depth of field corresponding to the next-layer interaction information, controlling the display screen to display the next-layer interaction information and blur the main icon and the previous-layer interaction information, wherein the next-layer interaction information is the sub-information corresponding to the target sub-icon.
According to one embodiment of the present application, the plurality of sub-icons are distributed around the periphery of the display area of the previous-layer interaction information.
According to an embodiment of the present application, controlling the display screen to switch and display the multi-layer interaction information according to the focus position and the gaze distance further comprises:
in response to determining that the focus position deviates from the main icon and the gaze distance is the preset depth of field corresponding to the previous-layer interaction information, controlling the display screen to keep displaying the previous-layer interaction information and close the plurality of sub-icons and the next-layer interaction information; and
in response to determining that the focus position deviates from the target sub-icon and the gaze distance is the preset depth of field corresponding to the next-layer interaction information, controlling the display screen to keep displaying the next-layer interaction information and close the main icon and the previous-layer interaction information.
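Purely as an illustration of the icon-based case above (not part of the claims), the four switching rules can be sketched as a small decision function. The layer names, the string-valued actions, and the pre-classified inputs (`focus_target`, `gaze_layer`) are assumptions made for this sketch, not terms from the patent:

```python
def icon_case(focus_target, gaze_layer):
    """focus_target: 'main', the name of a target sub-icon, or None when
    the focus deviates from both; gaze_layer: the layer ('previous' or
    'next') whose preset depth of field the gaze distance matches."""
    if focus_target == "main" and gaze_layer == "previous":
        return ("display previous", "blur sub-icons and next")
    if focus_target not in ("main", None) and gaze_layer == "next":
        # the next layer shows the sub-information of the target sub-icon
        return ("display next: " + focus_target, "blur main and previous")
    if focus_target is None and gaze_layer == "previous":
        return ("keep previous", "close sub-icons and next")
    if focus_target is None and gaze_layer == "next":
        return ("keep next", "close main and previous")
    return None
```

For instance, a gaze landing on a hypothetical "cultivation" sub-icon at the next layer's depth would yield the pair ("display next: cultivation", "blur main and previous").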
An augmented reality device provided according to a second aspect of the present application includes:
a display screen that emits an interaction beam carrying interaction information;
a mirror that transmits visible light and reflects the interaction beam to the human eye;
an eye movement detector that detects the focus of a user's line of sight;
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the augmented reality information interaction method of the first aspect.
According to a third aspect of the present application, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the augmented reality information interaction method according to the first aspect.
According to the augmented reality information interaction method, augmented reality device and storage medium provided above, the user's gaze focus information on the augmented reality object is determined, and the focus position and gaze distance included in that information are used to control the display screen to switch among multiple layers of interaction information. The response is fast and the probability of misoperation is low; the interaction information is displayed in layers, avoiding the eye fatigue caused by crowding all of it onto the same interaction layer at once. In addition, compared with the prior-art approaches of controlling blink frequency or gaze dwell time, this interaction is more natural and does not require the user to deliberately control the line of sight, thereby improving the user experience.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings. The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. In the drawings:
Fig. 1 is a flow diagram of an augmented reality information interaction method according to an embodiment of the application;
fig. 2 is a schematic structural view of an augmented reality device according to an embodiment of the present application;
fig. 3 is a schematic diagram of the working principle of an augmented reality device according to an embodiment of the application;
fig. 4 is a block diagram of an electronic device for implementing the augmented reality information interaction method of an embodiment of the present application.
Reference numerals:
100. an augmented reality device; 110. a display screen; 111. the previous layer of interaction information;
112. the next layer of interaction information; 120. a reflecting mirror; 130. an eye movement detecting member;
140. a wearing part; 150. an eye; 200. an electronic device; 201. a calculation unit;
202. read-only memory (ROM); 203. random-access memory (RAM); 204. a bus;
205. an I/O interface; 206. an input unit; 207. an output unit; 208. a storage unit;
209. and a communication unit.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In addition, embodiments and features of embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 2 shows an exemplary augmented reality device 100 to which an embodiment of the augmented reality information interaction method of the present application can be applied, and fig. 3 shows a schematic diagram of the working principle of the augmented reality device 100. As shown in figs. 2 and 3, the augmented reality device 100 includes a display screen 110, a mirror 120, an eye movement detector 130, and a processor (not shown). The display screen 110 emits an interaction beam carrying interaction information; the mirror 120 transmits visible light and reflects the interaction beam emitted by the display screen 110 to the human eye; and the eye movement detector 130 detects the focus of the user's line of sight. The display screen 110 and the eye movement detector 130 are each communicatively connected to the processor, by wired or wireless means. The augmented reality information interaction method provided by the embodiments of the present application is generally executed by the processor.
Fig. 1 shows a flow diagram of one embodiment of an augmented reality information interaction method according to the present application. As shown in fig. 1, the augmented reality information interaction method 1000 includes the following steps:
S100, determining gaze focus information of a user on an augmented reality object, wherein the gaze focus information comprises a focus position and a gaze distance, the augmented reality object comprises a real object and virtual information superimposed on the real object, and the virtual information comprises multiple layers of interaction information corresponding to different depths of field;
and S200, controlling the display screen 110 to switch and display the multi-layer interaction information according to the focus position and the gaze distance.
When the user's line of sight falls on the augmented reality object, the eye movement detector 130 detects the user's gaze focus. The processor determines the focus position and the gaze distance of the user's line of sight from the detection result of the eye movement detector 130, and from these infers the user's intention, that is, which layer of interaction information the user wants to view. The processor then controls the display screen 110 to switch among the different layers of interaction information, and the interaction beam emitted by the display screen 110 in accordance with the user's intention is reflected by the mirror 120 into the user's eyes 150, so that the user sees the layer of interaction information they want to view.
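As an illustrative sketch only, the core inference step, mapping a detected gaze distance to the layer whose preset depth of field it matches, can be expressed as follows. The layer depths and the matching tolerance are assumed tuning values, not figures from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class GazeSample:
    focus_xy: Tuple[float, float]  # focus position on the view plane
    gaze_distance: float           # estimated vergence distance, in metres

# Assumed rendering depths for the two interaction layers (illustrative only).
LAYER_DEPTHS = {"previous": 1.0, "next": 0.5}
DEPTH_TOLERANCE = 0.15  # metres; an assumed matching tolerance

def select_layer(sample: GazeSample) -> Optional[str]:
    """Return the layer whose preset depth of field matches the detected
    gaze distance, or None if no layer matches."""
    for name, depth in LAYER_DEPTHS.items():
        if abs(sample.gaze_distance - depth) <= DEPTH_TOLERANCE:
            return name
    return None
```

A gaze distance near 1.0 m would select the previous layer, one near 0.5 m the next layer, and any other distance no layer at all.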
Therefore, according to the augmented reality information interaction method, by determining the user's gaze focus information on the augmented reality object and using the focus position and gaze distance it contains to control the display screen 110 to switch among different layers of interaction information, the response is fast and the probability of misoperation is low; the interaction information is displayed in layers, avoiding the eye fatigue caused by crowding all interaction information onto the same layer at once. In addition, compared with the prior-art approaches of controlling blink frequency or gaze dwell time, the interaction follows the gaze naturally and does not require the user to deliberately control the line of sight, thereby improving the user experience.
It should be noted that, in the augmented reality technology, virtual information and the real world are superimposed on the same screen, so that a user can see not only a real object in the real world but also virtual information reflected to the human eye by the display screen 110 through the mirror 120 by the augmented reality device 100. Based on this, each step of the augmented reality information interaction method in the embodiment of the present application is specifically described below.
Step S100
In step S100, gaze focus information of a user on an augmented reality object is determined. As described above, the augmented reality object may include a real object and virtual information superimposed on the real object, so the user's line of sight may fall either on the real object or on the virtual information. Thus, in some embodiments, step S100 may include: determining the gaze focus information of the user on the real object. In other embodiments, step S100 may include: determining the gaze focus information of the user on the virtual information.
Step S200
In step S200, the display screen 110 is controlled to switch and display the multi-layer interaction information according to the focus position and the gaze distance. Since the augmented reality object may include a real object and virtual information superimposed on it, the processor may control the display screen 110 to switch the displayed layer according to the gaze distance when the user's gaze focus falls in a designated area, or when it falls on a designated icon included in the virtual information. The specific way the processor controls the display screen 110 differs between these two situations; both are described below, taking multi-layer interaction information consisting of previous-layer interaction information 111 and next-layer interaction information 112 as an example.
In the first case, where the user's gaze focus falls in a designated area, step S200 includes: in response to determining that the focus position is in the preset interaction area and the gaze distance is the preset depth of field corresponding to the previous-layer interaction information 111, controlling the display screen 110 to display the previous-layer interaction information 111 and blur the next-layer interaction information 112; and in response to determining that the focus position is in the preset interaction area and the gaze distance is the preset depth of field corresponding to the next-layer interaction information 112, controlling the display screen 110 to display the next-layer interaction information 112 and blur the previous-layer interaction information 111. The preset interaction area is a designated area on the real object or a designated area on the virtual information, and the preset depth of field may be the depth at which the corresponding layer of interaction information is located.
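A minimal sketch of this first-case decision, assuming an axis-aligned rectangular preset interaction area and the same assumed depth values as before (`region`, `prev_depth`, `next_depth` and `tol` are all hypothetical parameters introduced here for illustration):

```python
def in_region(focus_xy, region):
    """True if the focus position lies in the preset interaction area,
    modelled here as an axis-aligned rectangle (x0, y0, x1, y1)."""
    x, y = focus_xy
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

def switch_layers(focus_xy, gaze_distance, region,
                  prev_depth=1.0, next_depth=0.5, tol=0.15):
    """Return (layer to display, layer to blur), or None if neither
    condition of the first case is met."""
    if not in_region(focus_xy, region):
        return None
    if abs(gaze_distance - prev_depth) <= tol:
        return ("previous", "next")  # show previous layer, blur next
    if abs(gaze_distance - next_depth) <= tol:
        return ("next", "previous")  # show next layer, blur previous
    return None
```

The function returns None both when the focus is outside the area and when the gaze distance matches neither layer's depth, in which case the display state is left unchanged.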
As an example, the preset interaction area is a designated area on the virtual information: the display area of the previous-layer interaction information 111 partially overlaps the display area of the next-layer interaction information 112, and the preset interaction area is the portion where the two display areas overlap. When the user's gaze focus falls in the preset interaction area and the processor determines, from the detection result of the eye movement detector 130, that the gaze distance is the preset depth of field corresponding to the previous-layer interaction information 111, this indicates that the user wants to view the previous-layer interaction information 111. The processor therefore controls the display screen 110 to display the previous-layer interaction information 111 and blur the next-layer interaction information 112; the interaction beam emitted by the display screen 110 is reflected by the mirror 120 into the user's eyes 150, so that the user sees the clear previous-layer interaction information 111 and the blurred next-layer interaction information 112, both superimposed on the real object seen through the mirror 120.
Conversely, when the user's gaze focus falls in the preset interaction area and the processor determines, from the detection result of the eye movement detector 130, that the gaze distance is the preset depth of field corresponding to the next-layer interaction information 112, this indicates that the user wants to view the next-layer interaction information 112. The processor then controls the display screen 110 to display the next-layer interaction information 112 and blur the previous-layer interaction information 111, so that the user sees the clear next-layer interaction information 112 and the blurred previous-layer interaction information 111 superimposed on the real object seen through the mirror 120.
However, the blurred layer remains visible and may interfere with viewing. To address this, in the embodiment of the present application the user can close the currently blurred layer of interaction information by moving the line of sight. Specifically, step S200 further includes: in response to determining that the focus position deviates from the preset interaction area and the gaze distance is the preset depth of field corresponding to the previous-layer interaction information 111, controlling the display screen 110 to keep displaying the previous-layer interaction information 111 and close the next-layer interaction information 112; and in response to determining that the focus position deviates from the preset interaction area and the gaze distance is the preset depth of field corresponding to the next-layer interaction information 112, controlling the display screen 110 to keep displaying the next-layer interaction information 112 and close the previous-layer interaction information 111.
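This close-on-deviation behaviour can be sketched as a small state update. The state representation and the pre-classified inputs (`focus_in_region`, `gaze_layer`) are assumptions made for illustration, not structures defined by the patent:

```python
def handle_deviation(state, focus_in_region, gaze_layer):
    """state: {'shown': <layer>, 'blurred': <layer or None>}.
    gaze_layer: the layer whose preset depth of field the gaze distance
    matches. When the focus leaves the preset interaction area while the
    gaze stays at the shown layer's depth, the blurred layer is closed."""
    if not focus_in_region and gaze_layer == state["shown"]:
        return {"shown": state["shown"], "blurred": None}
    return dict(state)  # otherwise keep the current display state
```

Note that only the blurred layer is closed; the layer the user is actually viewing keeps being displayed, matching both branches of the step above.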
In some embodiments, the next-layer interaction information 112 may include at least one piece of sub-information of the previous-layer interaction information 111. In embodiments in which it includes a plurality of pieces of sub-information, the processor may control the display screen 110 to display the pieces of sub-information sequentially in a predetermined order.
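For example, the sequential display could be driven by elapsed gaze time. The per-piece dwell duration below is an assumed tuning value; the patent only specifies that the pieces appear in a predetermined order:

```python
def current_sub_info(pieces, elapsed, dwell=2.0):
    """Return the piece of sub-information shown after `elapsed` seconds,
    showing each piece for `dwell` seconds in the predetermined order
    (the list order) and holding on the last piece."""
    if not pieces:
        return None
    idx = min(int(elapsed // dwell), len(pieces) - 1)
    return pieces[idx]
```

With three pieces and a 2-second dwell, the display advances at 2 s and 4 s and then stays on the final piece.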
The following describes the augmented reality information interaction method of the embodiment of the present application, taking a bonsai as the real object and taking the preset interaction area to be the portion where the display area of the previous-layer interaction information 111 and the display area of the next-layer interaction information 112 overlap:
Since the previous-layer interaction information 111 and the next-layer interaction information 112 are located at different layers, their display areas may partially overlap. Suppose the display area of the previous-layer interaction information 111 is divided into an area A and an area B, the display area of the next-layer interaction information 112 is divided into an area C and an area D, and area A completely coincides with area C. (Alternatively, the display area of the previous-layer interaction information 111 may consist only of area A, or the display area of the next-layer interaction information 112 may consist only of area C.)
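Treating the two display areas as axis-aligned rectangles, the preset interaction area (area A, which coincides with area C) is simply their intersection. The coordinates below are arbitrary illustrative values:

```python
def intersect(r1, r2):
    """Intersection of two axis-aligned rectangles (x0, y0, x1, y1);
    returns None if they do not overlap."""
    x0, y0 = max(r1[0], r2[0]), max(r1[1], r2[1])
    x1, y1 = min(r1[2], r2[2]), min(r1[3], r2[3])
    if x0 >= x1 or y0 >= y1:
        return None
    return (x0, y0, x1, y1)

# Previous-layer display area (A plus B) and next-layer display area (C plus D):
prev_area = (0, 0, 4, 3)
next_area = (2, 0, 6, 3)
interaction_area = intersect(prev_area, next_area)  # area A, i.e. area C
```

Area B is then the part of `prev_area` outside the intersection, and area D the part of `next_area` outside it.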
The previous-layer interaction information 111 may include, but is not limited to, at least one of a basic-information icon, a cultivation-method icon and a purchase-link icon for the bonsai, and the next-layer interaction information 112 includes at least one of basic-information details, cultivation-method details and purchase-link details for the bonsai. The basic-information details include at least one of the bonsai's name, family, morphological features, distribution range and growth habits; the cultivation-method details include at least one of cultivation, grafting, pruning and disease-control methods for the bonsai; and the purchase-link details include at least one of the purchase price, purchase instructions and a purchase button.
Thus, when the user looks straight ahead at the bonsai, if the gaze focus falls in area A (that is, area C) and the gaze distance is the preset depth of field corresponding to the previous-layer interaction information 111, the processor controls the display screen 110 to display the previous-layer interaction information 111 (the basic-information icon, cultivation-method icon and purchase-link icon) while blurring the next-layer interaction information 112 (the basic-information details, cultivation-method details and purchase-link details). The interaction beam carrying this information is reflected by the mirror 120 into the user's eyes 150. If the user does not want to see the blurred next-layer interaction information 112, they can shift their line of sight while keeping the gaze distance unchanged, i.e. still at the preset depth of field corresponding to the previous-layer interaction information 111, so that the focus position falls on area B, area D or the bonsai itself; the processor then controls the display screen 110 to close the blurred next-layer interaction information 112, and the user sees only the clear previous-layer interaction information 111. Similarly, when the gaze focus falls in area A (area C) and the gaze distance is the preset depth of field corresponding to the next-layer interaction information 112, the processor controls the display screen 110 to sequentially display the basic-information details, cultivation-method details and purchase-link details of the bonsai in a predetermined order while blurring the previous-layer interaction information 111.
If the user does not want to see the blurred previous layer of interaction information 111, the user may shift the line of sight while keeping the gaze distance unchanged (i.e. still at the predetermined depth of field corresponding to the next layer of interaction information 112) so that the focal position falls on area B, area D, or the bonsai. The processor then controls the display screen 110 to close the blurred previous layer of interaction information 111 while keeping the sharp next layer of interaction information 112 displayed. If the user later wants to see the previous layer of interaction information 111 again, the focus of the line of sight can be returned to area A (i.e. area C) while the gaze distance remains at the predetermined depth of field corresponding to the next layer of interaction information 112; the display screen 110 then shows the next layer of interaction information 112 together with the blurred previous layer of interaction information 111. If the user wants to see the previous layer of interaction information 111 sharply, the user looks at the blurred previous layer of interaction information 111 in area A (i.e. area C). The gaze distance then naturally changes to the predetermined depth of field corresponding to the previous layer of interaction information 111, and the processor controls the display screen 110 to display the previous layer of interaction information 111 and to blur the next layer of interaction information 112.
Thus, compared with prior-art approaches that rely on controlling blink frequency and line-of-sight dwell time, in the embodiments of the present application the user can switch between displaying the previous layer of interaction information 111 and the next layer of interaction information 112 without deliberately controlling the line of sight.
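The two-layer switching rule described above can be summarized in code. The following is a minimal sketch, not the patented implementation: the depth values, the tolerance, and the rectangular test for the preset interaction area are all illustrative assumptions.

```python
# Hedged sketch of the two-layer switching rule: focus position and gaze
# distance jointly decide whether each layer is sharp, blurred, or closed.

DEPTH_PREV = 1.0   # predetermined depth of field of layer 111 (metres, assumed)
DEPTH_NEXT = 0.6   # predetermined depth of field of layer 112 (metres, assumed)
TOL = 0.1          # how closely the gaze distance must match a layer depth

def in_area(focus_xy, area):
    """Rectangular hit test for the preset interaction area (area A / C)."""
    x0, y0, x1, y1 = area
    x, y = focus_xy
    return x0 <= x <= x1 and y0 <= y <= y1

def layer_states(focus_xy, gaze_distance, area):
    """Return the display state of (layer 111, layer 112)."""
    inside = in_area(focus_xy, area)
    if abs(gaze_distance - DEPTH_PREV) <= TOL:
        # Gazing at the previous layer's depth: 111 sharp; 112 blurred
        # inside the interaction area, closed once the focus leaves it.
        return ("sharp", "blurred" if inside else "closed")
    if abs(gaze_distance - DEPTH_NEXT) <= TOL:
        # Gazing at the next layer's depth: the roles are reversed.
        return ("blurred" if inside else "closed", "sharp")
    return ("closed", "closed")
```

Shifting the focus out of the area while keeping the gaze distance fixed reproduces the "keep one layer, close the other" behaviour described in the text.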
Second, consider the situation in which the focus of the user's line of sight falls on a designated icon, where the virtual information includes a main icon and a plurality of sub-icons. In this case, step S200 includes: in response to determining that the focus position is on the main icon and the gaze distance is the predetermined depth of field corresponding to the previous layer of interaction information 111, controlling the display screen 110 to display the previous layer of interaction information 111 and to blur the plurality of sub-icons and the next layer of interaction information 112; and in response to determining that the focus position is on a target sub-icon and the gaze distance is the predetermined depth of field corresponding to the next layer of interaction information 112, controlling the display screen 110 to display the next layer of interaction information 112 and to blur the main icon and the previous layer of interaction information 111, wherein the next layer of interaction information 112 is the sub-information corresponding to the target sub-icon, and the target sub-icon is any one of the sub-icons. It should be noted that the plurality of sub-icons may be located on the same layer, while the sub-icons, the main icon, the previous layer of interaction information 111, and the next layer of interaction information 112 are located on different layers. Alternatively, the main icon may be on the same layer as the previous layer of interaction information 111, and the plurality of sub-icons may be on the same layer as the next layer of interaction information 112.
To address the above problem, in this embodiment of the present application the user may close the currently blurred layer of interaction information by moving the line of sight. Specifically, step S200 further includes: in response to determining that the focus position deviates from the main icon and the gaze distance is the predetermined depth of field corresponding to the previous layer of interaction information 111, controlling the display screen 110 to keep displaying the previous layer of interaction information 111 and to close the plurality of sub-icons and the next layer of interaction information 112; and in response to determining that the focus position deviates from the target sub-icon and the gaze distance is the predetermined depth of field corresponding to the next layer of interaction information 112, controlling the display screen 110 to keep displaying the next layer of interaction information 112 and to close the main icon and the previous layer of interaction information 111.
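The main-icon / sub-icon variant of step S200 can likewise be sketched as a small dispatch function. This is a minimal illustration under assumed names (icon identifiers, state labels), not the patented implementation.

```python
# Hedged sketch of the main-icon / sub-icon dispatch: which elements are
# sharp, blurred, or closed depends on the focused icon and on which
# predetermined depth of field the gaze distance matches.

SUB_INFO = {                      # target sub-icon -> its sub-information
    "basic_info_icon": "basic information details",
    "cultivation_icon": "cultivation mode details",
    "purchase_icon": "purchase link details",
}

def dispatch(focused, depth_layer):
    """focused: icon name or None; depth_layer: 'prev' or 'next',
    i.e. which layer's predetermined depth of field the gaze matches."""
    if depth_layer == "prev":
        if focused == "main_icon":
            return {"layer_111": "sharp", "sub_icons": "blurred",
                    "layer_112": "blurred"}
        # Focus deviates from the main icon: close the blurred elements.
        return {"layer_111": "sharp", "sub_icons": "closed",
                "layer_112": "closed"}
    # depth_layer == "next"
    if focused in SUB_INFO:
        return {"layer_112": SUB_INFO[focused], "main_icon": "blurred",
                "layer_111": "blurred"}
    # Focus deviates from all sub-icons: keep 112, close 111 and main icon.
    return {"layer_112": "kept", "main_icon": "closed",
            "layer_111": "closed"}
```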
To make it easy for the user to place the focus of the line of sight on the corresponding sub-icon, the plurality of sub-icons are distributed around the periphery of the display area of the previous layer of interaction information 111. For example, all sub-icons are spaced around the display area of the previous layer of interaction information 111.
The information interaction method of this embodiment of the present application is described below, again taking a bonsai as the display object:
Here, the plurality of sub-icons includes at least one of a basic information icon, a cultivation mode icon, and a purchase link icon. In this case, the next layer of interaction information 112 may be the basic information details corresponding to the basic information icon, the cultivation mode details corresponding to the cultivation mode icon, or the purchase link details corresponding to the purchase link icon. The basic information details include at least one of the name, family, morphological characteristics, distribution range, and growth habits of the bonsai; the cultivation mode details include at least one of cultivation, grafting, pruning, and disease control methods for the bonsai; and the purchase link details include at least one of a purchase price, purchase instructions, and a purchase button.
Thus, when the user looks at the bonsai in front of them, if the focus of the user's line of sight is on the main icon and the gaze distance is the predetermined depth of field corresponding to the previous layer of interaction information 111, the processor controls the display screen 110 to display the previous layer of interaction information 111 and to blur the basic information icon, the cultivation mode icon, the purchase link icon, and the next layer of interaction information 112. If the user does not want to see these blurred icons and the blurred next layer of interaction information 112, the user can shift the line of sight while keeping the gaze distance unchanged (i.e. still at the predetermined depth of field corresponding to the previous layer of interaction information 111) so that the focal position deviates from the main icon, whereupon the processor controls the display screen 110 to close the blurred basic information icon, cultivation mode icon, purchase link icon, and next layer of interaction information 112. At this point, the user sees only the sharp previous layer of interaction information 111 and the main icon. Similarly, when the focus of the user's line of sight is on a target sub-icon, such as the basic information icon, and the gaze distance is the predetermined depth of field corresponding to the next layer of interaction information 112, the processor controls the display screen 110 to display the basic information details while blurring the previous layer of interaction information 111 and the main icon.
At this point, the interaction light beam carrying the basic information details emitted by the display screen 110 is reflected by the mirror 120 into the user's eyes 150, so that the user can see content such as the name, family, morphological characteristics, distribution range, and growth habits of the bonsai. Likewise, with the gaze distance unchanged (i.e. still at the predetermined depth of field corresponding to the next layer of interaction information 112), when the focus of the user's line of sight is on the cultivation mode icon or the purchase link icon, the processor controls the display screen 110 to display the cultivation mode details or the purchase link details while blurring the previous layer of interaction information 111 and the main icon. If the user does not want to see the blurred previous layer of interaction information 111 and main icon, the line of sight can be shifted, with the gaze distance maintained, so that the focal position deviates from the basic information icon, the cultivation mode icon, and the purchase link icon. The processor then controls the display screen 110 to close the blurred previous layer of interaction information 111 and main icon.
Further, if the display screen 110 displays only the next layer of interaction information 112 together with the basic information icon, cultivation mode icon, and purchase link icon, and the user wants to see the previous layer of interaction information 111 and the main icon again, the user can look at any one of the basic information icon, cultivation mode icon, and purchase link icon while keeping the gaze distance unchanged (i.e. still at the predetermined depth of field corresponding to the next layer of interaction information 112). The display screen 110 then displays the next layer of interaction information 112, the basic information icon, the cultivation mode icon, and the purchase link icon, and also displays the blurred previous layer of interaction information 111 and main icon. If the user wants to see the previous layer of interaction information 111 sharply, the user can look at the main icon so that the gaze distance becomes the predetermined depth of field corresponding to the previous layer of interaction information 111. The processor then controls the display screen 110 to display the previous layer of interaction information 111 and the main icon, and to blur the next layer of interaction information 112, the basic information icon, the cultivation mode icon, and the purchase link icon.
It should be noted that the previous layer of interaction information 111 and the next layer of interaction information 112 refer to any two different layers of interaction information; those skilled in the art can use the augmented reality information interaction method of the present application to switch the display among three or more layers of interaction information without departing from the concept of the present application. For example, when switching the display between third-layer interaction information and fourth-layer interaction information, the third-layer interaction information corresponds to the previous layer of interaction information 111 and the fourth-layer interaction information corresponds to the next layer of interaction information 112.
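The generalization to three or more layers can be sketched as follows. Treating the gazed layer and its neighbours as the current "previous"/"next" pair, and the specific depth values, are illustrative assumptions rather than details from the patent.

```python
# Hedged sketch: extending the switching rule to N layers, each with its
# own predetermined depth of field.

LAYER_DEPTHS = [1.2, 0.9, 0.6, 0.3]   # depths for layers 1..4 (assumed)

def active_layer(gaze_distance):
    """Index of the layer whose depth of field is closest to the gaze."""
    return min(range(len(LAYER_DEPTHS)),
               key=lambda i: abs(LAYER_DEPTHS[i] - gaze_distance))

def render_states(gaze_distance, focus_in_area):
    """Sharp for the gazed layer; adjacent layers blurred while the focus
    stays in the interaction area, closed once it leaves."""
    a = active_layer(gaze_distance)
    states = []
    for i in range(len(LAYER_DEPTHS)):
        if i == a:
            states.append("sharp")
        elif abs(i - a) == 1 and focus_in_area:
            states.append("blurred")
        else:
            states.append("closed")
    return states
```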
Embodiments of the present application also provide an augmented reality device 100. The augmented reality device 100 includes a display screen 110, a mirror 120, an eye movement detector 130, at least one processor, and a memory communicatively connected to the at least one processor. The display screen 110 emits an interaction light beam carrying interaction information, the mirror 120 transmits visible light and reflects the interaction light beam to the human eye, the eye movement detector 130 detects the focus of the user's line of sight, and the memory stores instructions executable by the at least one processor so that the at least one processor can perform the augmented reality information interaction method.
In some embodiments, the eye movement detector 130 includes an infrared camera and a plurality of infrared light sources. The mirror 120 and the display screen 110 are disposed opposite each other, and the plurality of infrared light sources may all be fixed to the edge of the mirror 120 or all fixed to the edge of the display screen 110. Alternatively, some of the infrared light sources may be fixed to the edge of the mirror 120 and the rest to the edge of the display screen 110. Similarly, the infrared camera may be connected to the edge of the mirror 120 or the edge of the display screen 110 by a fixing member.
In embodiments in which all infrared light sources are fixed to the edge of the mirror 120 and the infrared camera is connected to the edge of the mirror 120 through a fixing member, the infrared light emitted by the infrared light sources is directed straight at the user's eyes 150. The infrared light reflected by the user's eyes 150 carries information about eye movement and is received by the infrared camera, which can determine the focus of the user's line of sight from the received infrared light and transmit the detection result to the processor. The interaction light beam carrying the interaction information emitted by the display screen 110 is projected directly onto the mirror 120 and enters the user's eyes 150 after being reflected by the mirror 120, while external visible light passes directly through the mirror 120 into the human eye. The user thus sees the augmented reality object, that is, the external real object together with the interaction information superimposed on it. The processor determines the user's gaze focus information on the augmented reality object from the received detection result and controls the display screen 110 to switch the displayed layers of interaction information according to the focus position and gaze distance included in the gaze focus information.
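The detect, decide, display pipeline just described can be sketched as a single control step. The class and method names below are stand-ins for the eye movement detector 130 and display screen 110 (their real interfaces are not given in the text), and the command strings are placeholders for the processor's actual control signals.

```python
# Hedged sketch of the detect -> decide -> display pipeline: the infrared
# camera estimates the gaze focus, the processor tests it against the
# interaction area and the layer's predetermined depth of field, and the
# display screen is commanded accordingly.

class EyeDetector:
    """Stand-in for eye movement detector 130; returns a fixed sample."""
    def read_focus(self):
        return {"position": (0.5, 0.5), "distance": 1.0}

class Display:
    """Stand-in for display screen 110; records the commands it receives."""
    def __init__(self):
        self.commands = []
    def apply(self, command):
        self.commands.append(command)

def control_step(detector, display, area, depth_prev, tol=0.1):
    focus = detector.read_focus()
    x, y = focus["position"]
    x0, y0, x1, y1 = area
    inside = x0 <= x <= x1 and y0 <= y <= y1
    at_prev = abs(focus["distance"] - depth_prev) <= tol
    if inside and at_prev:
        display.apply("show_111_blur_112")
    elif at_prev:
        display.apply("keep_111_close_112")
    else:
        display.apply("no_change")
    return display.commands[-1]
```

A real device would run `control_step` in a loop driven by the eye tracker's frame rate.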
In embodiments in which all infrared light sources are fixed to the edge of the display screen 110 and the infrared camera is connected to the edge of the display screen 110 through a fixing member, the infrared light emitted by the infrared light sources is projected directly onto the mirror 120 and is reflected by the mirror 120 toward the user's eyes 150. The infrared light reflected by the user's eyes 150, carrying information about eye movement, is projected onto the mirror 120 again and, after reflection by the mirror 120, is finally received by the infrared camera. The interaction light beam carrying the interaction information emitted by the display screen 110 is likewise projected directly onto the mirror 120 and enters the user's eyes 150 after being reflected by the mirror 120.
In some embodiments, the augmented reality device 100 further includes a wearing member 140, and the display screen 110 and the mirror 120 are each fixed to the wearing member 140. The wearing member 140 may be, but is not limited to, an augmented reality helmet or headband, and the mirror 120 may be, but is not limited to, a freeform mirror.
The embodiment of the application also provides an electronic device, which comprises at least one processor and a memory in communication connection with the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the augmented reality information interaction method.
The embodiment of the application also provides a computer readable storage medium storing a computer program, which realizes the augmented reality information interaction method when being executed by a processor.
Fig. 4 shows a schematic block diagram of an example electronic device 200 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers. The electronic device may also represent various forms of mobile devices capable of running a computing program. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the electronic device 200 includes a computing unit 201 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 202 or a computer program loaded from a storage unit 208 into a Random Access Memory (RAM) 203. The RAM 203 can also store various programs and data required for the operation of the electronic device 200. The computing unit 201, ROM 202, and RAM 203 are connected to each other through a bus 204. An input/output (I/O) interface 205 is also connected to the bus 204.
Various components in the electronic device 200 are connected to the I/O interface 205, including: an input unit 206 such as a key or the like; an output unit 207 such as various types of speakers and the like; a storage unit 208 such as a magnetic disk or the like; and a communication unit 209 such as a network card, modem, wireless communication transceiver, etc. The communication unit 209 allows the electronic device 200 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 201 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 201 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 201 performs the various methods and processes described above, such as the augmented reality information interaction method. For example, in some embodiments, the augmented reality information interaction method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 208. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 200 via the ROM 202 and/or the communication unit 209. When the computer program is loaded into the RAM 203 and executed by the computing unit 201, one or more steps of the augmented reality information interaction method described above may be performed. Alternatively, in other embodiments, the computing unit 201 may be configured to perform the augmented reality information interaction method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (10)

1. An augmented reality information interaction method is characterized by comprising the following steps:
determining sight focus information of a user on an augmented reality object, wherein the sight focus information comprises a focus position and a gaze distance, the augmented reality object comprises a real object and virtual information superimposed on the real object, and the virtual information comprises multiple layers of interaction information corresponding to different depths of field; and
and controlling a display screen to switch and display the multi-layer interaction information according to the focus position and the gaze distance, wherein the multi-layer interaction information comprises a previous layer of interaction information and a next layer of interaction information:
controlling the display screen to display the previous layer of interaction information and blurring the next layer of interaction information in response to the fact that the focus position is in a preset interaction area and the gaze distance is a preset depth of field corresponding to the previous layer of interaction information;
controlling the display screen to keep displaying the previous layer of interaction information and closing the next layer of interaction information in response to determining that the focus position deviates from the preset interaction area and the gaze distance is a preset depth of field corresponding to the previous layer of interaction information;
controlling the display screen to display the next-layer interaction information and blurring the previous-layer interaction information in response to the fact that the focus position is determined to be in the preset interaction area and the gaze distance is a preset depth of field corresponding to the next-layer interaction information; and
controlling the display screen to keep displaying the next-layer interaction information and closing the previous-layer interaction information in response to the fact that the focus position deviates from the preset interaction area and the gazing distance is a preset depth of field corresponding to the next-layer interaction information;
the preset interaction area is a specified area on the real object or a specified area on the virtual information.
2. The augmented reality information interaction method of claim 1, wherein determining gaze focus information of a user on an augmented reality object comprises:
determining the gaze focus information of a user on the real object; or
And determining the sight focus information of the user on the virtual information.
3. The augmented reality information interaction method according to claim 1, wherein the display area of the previous layer of interaction information partially overlaps the display area of the next layer of interaction information, and the preset interaction area is a portion where the display area of the previous layer of interaction information and the display area of the next layer of interaction information overlap each other.
4. The augmented reality information interaction method according to claim 1, wherein the next-layer interaction information includes at least one piece of sub-information of the previous-layer interaction information.
5. The augmented reality information interaction method of claim 4, wherein the next layer interaction information includes a plurality of pieces of the sub information;
wherein, control the display screen to display the next layer of interaction information and virtualize the previous layer of interaction information, include:
and controlling the display screen to sequentially display a plurality of pieces of sub-information according to a preset sequence, and blurring the interaction information of the previous layer.
6. The augmented reality information interaction method of claim 2, wherein the virtual information further comprises a main icon and a plurality of sub icons;
wherein controlling the display screen to switch and display the multi-layer interaction information according to the focus position and the gaze distance comprises:
controlling the display screen to display the previous layer of interaction information and blurring a plurality of sub-icons and the next layer of interaction information in response to determining that the focus position is in the main icon and the gaze distance is a preset depth of field corresponding to the previous layer of interaction information; and
and in response to determining that the focus position is in a target sub-icon and the gaze distance is a preset depth of field corresponding to the next-layer interaction information, controlling a display screen to display the next-layer interaction information and blurring the main icon and the previous-layer interaction information, wherein the next-layer interaction information is sub-information corresponding to the target sub-icon.
7. The augmented reality information interaction method according to claim 6, wherein a plurality of the sub-icons are distributed at a periphery of a display region of the previous layer of interaction information.
8. The augmented reality information interaction method of claim 6, wherein controlling a display screen to switch to display the multi-layer interaction information according to the focus position and the gaze distance, further comprises:
controlling the display screen to keep displaying the previous layer of interaction information and closing the plurality of sub-icons and the next layer of interaction information in response to determining that the focus position deviates from the main icon and the gaze distance is a preset depth of field corresponding to the previous layer of interaction information; and
and in response to determining that the focus position deviates from the target sub-icon and the gaze distance is a preset depth of field corresponding to the next-layer interaction information, controlling the display screen to keep displaying the next-layer interaction information and closing the main icon and the previous-layer interaction information.
9. An augmented reality device, comprising:
the display screen emits an interaction beam carrying interaction information;
a mirror transmitting visible light and reflecting the interaction beam to a human eye;
an eye movement detection member that detects a focus of a line of sight of a user;
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the augmented reality information interaction method of any one of claims 1 to 8.
10. A computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the augmented reality information interaction method according to any one of claims 1 to 8.
CN202211380357.8A 2022-11-04 2022-11-04 Augmented reality information interaction method, augmented reality device, and storage medium Active CN115562497B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211380357.8A CN115562497B (en) 2022-11-04 2022-11-04 Augmented reality information interaction method, augmented reality device, and storage medium


Publications (2)

Publication Number Publication Date
CN115562497A CN115562497A (en) 2023-01-03
CN115562497B (en) 2024-04-05

Family

ID=84767941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211380357.8A Active CN115562497B (en) 2022-11-04 2022-11-04 Augmented reality information interaction method, augmented reality device, and storage medium

Country Status (1)

Country Link
CN (1) CN115562497B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116755587B (en) * 2023-08-11 2023-12-19 之江实验室 Augmented reality method, device, storage medium and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111052042A (en) * 2017-09-29 2020-04-21 苹果公司 Gaze-based user interaction
CN113325947A (en) * 2020-02-28 2021-08-31 北京七鑫易维信息技术有限公司 Display method, display device, terminal equipment and storage medium
CN115202475A (en) * 2022-06-30 2022-10-18 江西晶浩光学有限公司 Display method, display device, electronic equipment and computer-readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3141985A1 (en) * 2015-09-10 2017-03-15 Alcatel Lucent A gazed virtual object identification module, a system for implementing gaze translucency, and a related method
KR20200021670A (en) * 2018-08-21 2020-03-02 삼성전자주식회사 Wearable device and controlling method thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111052042A (en) * 2017-09-29 2020-04-21 苹果公司 Gaze-based user interaction
CN113325947A (en) * 2020-02-28 2021-08-31 北京七鑫易维信息技术有限公司 Display method, display device, terminal equipment and storage medium
WO2021169853A1 (en) * 2020-02-28 2021-09-02 Beijing 7invensun Information Technology Co., Ltd. Display method and apparatus, and terminal device and storage medium
CN115202475A (en) * 2022-06-30 2022-10-18 Jiangxi Jinghao Optical Co., Ltd. Display method, display device, electronic equipment and computer-readable storage medium

Also Published As

Publication number Publication date
CN115562497A (en) 2023-01-03

Similar Documents

Publication Publication Date Title
US9761057B2 (en) Indicating out-of-view augmented reality images
US10948983B2 (en) System and method for utilizing gaze tracking and focal point tracking
US20190279407A1 (en) System and method for augmented reality interaction
US20180246635A1 (en) Generating user interfaces combining foreground and background of an image with user interface elements
KR102350300B1 (en) Gaze swipe selection
EP3008567B1 (en) User focus controlled graphical user interface using an head mounted device
US20160070439A1 (en) Electronic commerce using augmented reality glasses and a smart watch
KR20200110771A (en) Augmented Reality Content Adjustment Method and Device
US20130335301A1 (en) Wearable Computer with Nearby Object Response
CN107003521A (en) The display visibility assembled based on eyes
CN115562497B (en) Augmented reality information interaction method, augmented reality device, and storage medium
CN109643469B (en) Structured content for augmented reality rendering
JP2021096490A (en) Information processing device, information processing method, and program
KR20190101827A (en) Electronic apparatus for providing second content associated with first content displayed through display according to motion of external object, and operating method thereof
EP3805900A1 (en) Wearable device and control method therefor
US11836978B2 (en) Related information output device
US20230305635A1 (en) Augmented reality device, and method for controlling augmented reality device
CN115793848B (en) Virtual reality information interaction method, virtual reality device and storage medium
WO2023049244A1 (en) Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
US9153043B1 (en) Systems and methods for providing a user interface in a field of view of a media item
JP7339837B2 (en) Display device and display method
EP3510440B1 (en) Electronic device and operation method thereof
JP7274451B2 (en) System, management device, program, and management method
JP7139395B2 (en) Controllers, programs and systems
WO2023026798A1 (en) Display control device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant