CN112950791A - Display method and related device - Google Patents

Display method and related device

Info

Publication number
CN112950791A
CN112950791A (application CN202110377394.2A)
Authority
CN
China
Prior art keywords
layer
region
voltage
electrode layer
projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110377394.2A
Other languages
Chinese (zh)
Inventor
林明田
周伟彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110377394.2A
Publication of CN112950791A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Abstract

The embodiments of the present application disclose a display method and a related apparatus, involving at least machine learning in artificial intelligence. The AR device projects a virtual object on a projection layer. To prevent ambient light and shadow from overlapping the virtual object and making its visual effect hard to match the real environment, the projection region of the virtual object on the projection layer is determined, and a corresponding first region is determined in a shielding layer according to the positional correspondence between the projection layer and the shielding layer. When the augmented reality device projects the virtual object, the first region is switched from a light-transmitting state to a shielding state, while the second region of the shielding layer outside the first region is kept light-transmitting. In this way, the display effect and realism of the virtual object are improved without preventing the user from viewing objects in the real scene, improving the user experience.

Description

Display method and related device
Technical Field
The present application relates to the field of augmented reality, and in particular, to a display method and related apparatus.
Background
Augmented Reality (AR) is a technology that seamlessly fuses virtual information with the real world. Through an AR device, a user can observe virtual objects projected by AR technology, such as virtual animated characters or virtual indicators, within a real scene. From the user's perspective through the AR device, a projected virtual object should appear as if it actually existed in the real scene, bringing a completely new visual and interactive experience.
However, the virtual object is in fact a projection by the AR device onto the user's display interface. When the user observes through the AR device, the virtual object and the real scene overlap in light and shadow, so the virtual object the user sees fuses poorly with the real scene and lacks realism.
Disclosure of Invention
To solve this technical problem, the present application provides a display method and related apparatus that improve how well a virtual object fuses with the real scene and enhance the realism perceived when a user observes through an AR device.
The embodiment of the application discloses the following technical scheme:
in one aspect, the present application provides a display method, including:
determining a projection region of a virtual object on a projection layer, the projection layer being disposed in a display component of an augmented reality device, the display component further including an occlusion layer disposed overlapping the projection layer in a display direction, the occlusion layer being behind the projection layer in the display direction;
determining, in the shielding layer, a first area corresponding to the projection area according to the positional correspondence between the projection layer and the shielding layer;
and switching the state of the first area from a light-transmitting state to a shielding state, and keeping a second area except the first area in the shielding layer in the light-transmitting state, wherein the area in the light-transmitting state in the shielding layer does not obstruct the transmission of ambient light, and the area in the shielding state in the shielding layer obstructs the transmission of the ambient light.
In another aspect, the present application provides an augmented reality device, the device including a display component, a projection component, and a processing component, the display component including a projection layer and an occlusion layer disposed to overlap in a display direction, the occlusion layer being behind the projection layer in the display direction;
the projection component is used for projecting a virtual object on the projection layer;
the processing component is configured to, when the projection region of the virtual object on the projection layer is determined, determine a first region of the occlusion layer corresponding to the projection region according to the positional correspondence between the projection layer and the occlusion layer; and to switch the state of the first region from a light-transmitting state to a shielding state while keeping a second region of the occlusion layer other than the first region in the light-transmitting state, wherein a region of the occlusion layer in the light-transmitting state does not obstruct the transmission of ambient light, and a region in the shielding state obstructs the transmission of ambient light.
In another aspect, the present application provides a computer device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of the above aspect according to instructions in the program code.
In another aspect, the present application provides a computer-readable storage medium for storing a computer program for executing the method of the above aspect.
In another aspect, embodiments of the present application provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method of the above aspect.
According to the above technical solutions, the display component of the AR device includes a projection layer and a shielding layer arranged to overlap in the display direction, with the shielding layer behind the projection layer. The AR device projects a virtual object on the projection layer to achieve the visual effect of the virtual object being present in a real scene. To prevent ambient light and shadow from overlapping the virtual object and making its visual effect hard to match the real environment, the projection region of the virtual object on the projection layer is determined, and a first region corresponding to it is determined in the shielding layer according to the positional correspondence between the two layers. When the augmented reality device projects the virtual object, the first region is switched from the light-transmitting state to the shielding state, while the second region of the shielding layer outside the first region remains light-transmitting. Thus, when a user wears the AR device, the shielding layer is farther from the user than the projection layer; while the virtual object is projected, the first region, now in the shielding state, prevents ambient light from passing through the projection region without affecting the normal display of the virtual object to the user. The virtual object the user observes therefore looks more real and is less prone to abnormal transparency and distortion caused by ambient light superimposed as a background from the real scene.
Meanwhile, the second region of the shielding layer remains light-transmitting while the virtual object is projected and does not hinder ambient light, so the user can still observe the real environment around the virtual object normally and obtain the AR visual experience of a virtual object situated in the real environment.
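As a rough illustration of the flow summarized above, the three steps can be sketched in a few lines of Python. Every name and data structure here (the boolean occlusion grid, the `correspondence` callable) is an assumption made for illustration, not part of the patent:

```python
# Illustrative end-to-end sketch: determine the cells occupied by the virtual
# object on the projection layer, map them onto the occlusion layer, then
# switch that first region to the shielding state while everything else
# (the second region) stays light-transmitting.

def update_occlusion(occlusion, object_region, correspondence):
    """occlusion: 2D list of booleans, True = light-transmitting.
    object_region: set of (row, col) cells occupied by the virtual object
    on the projection layer.
    correspondence: function mapping a projection-layer cell to the
    matching occlusion-layer cell."""
    first_region = {correspondence(cell) for cell in object_region}
    for r, row in enumerate(occlusion):
        for c, _ in enumerate(row):
            # Shield the first region; keep the second region transmitting.
            occlusion[r][c] = (r, c) not in first_region
    return occlusion
```

For a lens whose two layers are aligned pixel-for-pixel, `correspondence` could simply be the identity function; a real device would derive it from the optical geometry.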
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a display method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an AR device according to an embodiment of the present disclosure;
fig. 3 is a schematic view of AR glasses according to an embodiment of the present disclosure;
fig. 4 is a flowchart of a display method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a display link of an AR device displaying a virtual object;
FIG. 6 is a diagram illustrating a user viewing a virtual object using an AR device of the related art;
FIG. 7 is a schematic diagram of a display link of an AR device displaying a virtual object;
FIG. 8a is a schematic diagram of an optical display architecture according to an embodiment of the present application;
FIG. 8b is a schematic diagram of an optical display architecture according to an embodiment of the present application;
FIG. 8c is a schematic diagram of an optical display architecture according to an embodiment of the present application;
FIG. 8d is a schematic diagram of an optical display architecture according to an embodiment of the present application;
FIG. 8e is a schematic diagram of an optical display architecture according to an embodiment of the present application;
fig. 9 is a schematic diagram of a display link of an AR device displaying a virtual object according to an embodiment of the present application;
fig. 10 is a schematic view of an electrochromic layer provided in an embodiment of the present application;
fig. 11 is a schematic diagram illustrating an operating principle of an electrochromic grating module according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram illustrating an operating principle of an electrochromic grating module according to an embodiment of the present disclosure;
fig. 13 is a schematic diagram illustrating an operation principle of an electrochromic grating module according to an embodiment of the present disclosure;
fig. 14 is an equivalent circuit diagram of an electrochromic grating module corresponding to a display pixel according to an embodiment of the disclosure;
fig. 15 is an equivalent circuit diagram of an electrochromic grating module corresponding to a display pixel according to an embodiment of the present application;
fig. 16 is a schematic view of an electrochromic layer provided in an embodiment of the present application;
fig. 17 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
With the development of science and technology, AR devices are becoming more and more popular and are gradually appearing in people's work and daily life. Compared with Virtual Reality (VR) devices, however, AR devices have a poorer display effect; for example, virtual objects fall short in color contrast and color saturation. Moreover, in actual use, the variability and complexity of usage scenes make color cast in the displayed virtual object unavoidable, so the virtual object fuses poorly with the real scene, lacks realism, and degrades the user experience.
Based on this, the embodiment of the application provides a display method and a related device, which are used for improving the sense of reality of a virtual object in a real scene and improving the use experience of a user.
The display method provided by the embodiments of the present application is implemented based on Artificial Intelligence (AI). AI is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision-making.
The artificial intelligence technology is a comprehensive subject and relates to the field of extensive technology, namely the technology of a hardware level and the technology of a software level. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
In the embodiment of the present application, the artificial intelligence software technology mainly involved includes the above-mentioned computer vision technology and machine learning/deep learning directions.
The display method provided by the present application can be applied to display devices with data-processing capability, such as terminal devices and servers. The terminal device may be, but is not limited to, a smartphone, desktop computer, notebook computer, tablet computer, smart speaker, smart watch, or AR device; the server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing cloud computing services. The terminal device and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in the present application.
The display device may employ computer vision techniques. Computer Vision (CV) is a science that studies how to make machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and further processing the images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping.
The display device may be provided with machine learning capabilities. Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and more. It specializes in studying how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all fields of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
In the display method provided by the embodiment of the application, the adopted artificial intelligence model mainly relates to the application of the computer vision technology, and the fusion of the virtual object and the real scene is realized through the related technologies such as 3D, VR, AR and the like in the computer vision technology.
In order to facilitate understanding of the technical solution of the present application, the following describes a display method provided in the embodiments of the present application with a terminal as a display device in combination with an actual application scenario.
Referring to fig. 1, the figure is a schematic view of an application scenario of a display method provided in an embodiment of the present application. In the application scenario shown in fig. 1, the terminal device 100 is AR glasses (a kind of AR device), and the user can see a virtual object merged with the real scenario by wearing the AR glasses.
The display component of the AR glasses is a lens, and the lens includes a projection layer 110 and a shielding layer 120. The projection layer 110 and the shielding layer 120 are disposed to overlap in the display direction, and the shielding layer 120 is behind the projection layer 110 in the display direction. As shown in fig. 1, both the projection layer 110 and the obscuring layer 120 are located on the lens, with the projection layer 110 being closer to the user than the obscuring layer 120.
The AR glasses project the virtual object on the projection layer 110, and the user can view not only the virtual object but also the real environment through the AR glasses, thereby realizing the visual effect that the virtual object is in the real scene.
A virtual object in a real scene may be affected by ambient light; for example, when the background behind the virtual object is dark, the color of the virtual object appears darker. To prevent ambient light and shadow overlapping the virtual object from making its visual effect hard to match the real environment, the projection region of the virtual object on the projection layer 110 must be determined, and the first region corresponding to it must be determined in the shielding layer 120 according to the positional correspondence between the two layers.
In the application scenario shown in fig. 1, the virtual object is in a "cross" shape pattern, the projection area of the virtual object projected onto the projection layer 110 is in a "cross" shape pattern, and the first area corresponding to the projection area is also in a "cross" shape pattern in the shielding layer 120.
When the AR glasses project the virtual object, the state of the first region is switched from the light-transmitting state to the shielding state, and the second region of the shielding layer outside the first region is kept light-transmitting. As shown in fig. 1, in the shielding layer 120, the first region does not allow ambient light to pass, while the second region does. Ambient light therefore does not enter the projection layer 110 through the first region, so the virtual object projected on the projection region is unaffected by it: the object is not prone to abnormal transparency or distortion caused by ambient light superimposed as a background from the real scene, its normal display to the user is preserved, and the virtual object the user observes looks more real.
Meanwhile, the second region of the shielding layer 120 remains light-transmitting while the virtual object is projected and does not hinder ambient light, so the user can normally observe the real environment around the virtual object and obtain the AR visual experience of a virtual object situated in the real environment. The virtual object and the real environment thus fuse better, the realism of the virtual object in the real scene improves, and the user experience improves.
A display method provided in an embodiment of the present application is described below with reference to the accompanying drawings, where a terminal device is used as a display device.
Referring to fig. 2, a schematic diagram of an AR device provided in an embodiment of the present application, the AR device 200 includes a display component 210, a projection component 220, and a processing component 230. The display component includes a projection layer 211 and a shielding layer 212 arranged to overlap in the display direction; the shielding layer 212, located behind the projection layer 211 in the display direction, can cover the display range of the virtual object on the projection layer.
Taking AR glasses as an example, refer to fig. 3, which is a schematic diagram of AR glasses provided in an embodiment of the present application. The AR glasses 200 belong to an AR device, the display component 210 of the AR glasses 200 is a lens portion, the projection component 220 may be a portion of a component located on a glasses frame, the processing component 230 may be a portion of a component located on the glasses frame, the projection component 220 may be integrated with the processing component 230 or separately provided, and the configuration of the projection component and the processing component is not particularly limited in the present application. Both the projection layer 211 and the obscuring layer 212 overlie the lens, with the projection layer 211 being closer to the user than the obscuring layer 212.
The projection component 220 projects the virtual object on the projection layer 211, i.e., onto the lens. The processing component 230 determines the projection region of the virtual object on the projection layer 211 so that, when the virtual object is projected, it can control the shielding layer 212 to switch the state of the first region corresponding to the projection region to the shielding state. The processing performed by the processing component 230 is described below with reference to fig. 4.
Referring to fig. 4, the figure is a flowchart of a display method provided in an embodiment of the present application. As shown in fig. 4, the display method includes the following steps.
S401: a projection area of the virtual object on the projection layer is determined.
An AR device in the related art is affected by ambient light when projecting a virtual object; the reason for this is explained below in terms of principle.
Referring to fig. 5, the figure is a schematic diagram of a display link for displaying a virtual object by an AR device. When a user uses an AR device, the real environment is necessarily superimposed on the display: ambient light from the real environment and the light the AR device projects for the virtual object are superimposed and reach the human eye together. The influence of ambient light on the virtual object therefore cannot be avoided, and the imaging of the virtual object is adversely affected by the ambient light of the real environment.
For example, see fig. 6, a schematic diagram of a user observing a virtual object using an AR device in the related art. In fig. 6, the user wears AR glasses that project a virtual object, a white cuboid figure, against a triangular background of uneven color. Affected by the ambient light of that background, the cuboid figure seen by the human eye is darkened wherever the background is dark: the figure appears divided into several shades, with some regions deepened more than others. The displayed figure therefore looks very false, lacks realism, and harms the user experience.
Based on this, in order to reduce the influence of ambient light on the virtual object, the related art proposes a scheme of increasing the display brightness of the AR device and a scheme of reducing the light transmittance of the AR device, which are separately described below.
The first scheme is as follows: the display brightness of the AR device is increased.
The intensity of the light emitted by the display device may be increased to give the user better color contrast and color saturation. Referring to fig. 7, the figure is a schematic diagram of a display link for displaying a virtual object by an AR device. The intensity of ambient light is measured in real time by an ambient light sensor, or estimated from typical usage conditions, and the intensity of the light the AR device emits to display the virtual object is increased accordingly. Raising the proportion of the virtual object's brightness in the light reaching the user's eyes highlights the virtual object within the real scene and gives the user a better visual experience.
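Scheme one can be illustrated with a short sketch. The function name, the linear gain model, and the constants below are assumptions made for illustration, not details from the patent:

```python
# Hypothetical sketch of scheme one: raise projector brightness in proportion
# to the measured ambient light, clamped to the panel's maximum output.

def compensated_brightness(base_brightness: float,
                           ambient_lux: float,
                           gain: float = 0.002,
                           max_brightness: float = 1.0) -> float:
    """Scale display brightness with ambient light (normalized 0..1 output)."""
    boosted = base_brightness * (1.0 + gain * ambient_lux)
    return min(boosted, max_brightness)
```

The clamping step is also where the scheme's drawback shows: in bright environments the panel saturates at `max_brightness`, and pushing toward it is what drives up the power consumption criticized later in this section.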
Scheme two: the light transmittance of the AR device is reduced.
To reduce the influence of ambient light on the display effect of the virtual object, the transmittance of the lens in the AR device to ambient light can be reduced. With the entry of ambient light minimized, its influence on the virtual object is weakened, giving the user a better visual experience.
However, increasing the display brightness of the AR device (scheme one) significantly increases its power consumption, requiring a larger battery capacity and a heat-dissipation solution; this in turn enlarges the device and increases its wearing weight, markedly degrading the user experience. Scheme two, reducing the lens transmittance, does improve the display effect, but the low overall transmittance makes it hard for the user to see the real environment clearly, severely affecting the user's real-world interaction experience. In addition, both schemes aggravate the unreality of the virtual object and cannot deliver a better user experience.
For this reason, the present application reduces the transmittance only of the region displaying the virtual object while keeping the transmittance of the other, non-display regions, dynamically adjusting the transmittance of the display region. The display effect and realism of the virtual object are thus improved, and the user experience is enhanced, without affecting the user's view of objects in the real scene.
Thus, in order to adjust the light transmittance corresponding to the region where the virtual object is displayed, the projection region of the virtual object on the projection layer is determined.
The projection component may project the virtual object on the projection layer by using different optical display architecture schemes, for example, a prism scheme as shown in fig. 8a, a birdbath (birdbath) scheme as shown in fig. 8b, a free-form surface scheme as shown in fig. 8c, an off-axis holographic lens scheme as shown in fig. 8d, and a light guide (Lightguide) scheme as shown in fig. 8e, according to which the projection area of the virtual object on the projection layer can be determined.
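As an illustration of determining the projection region (step S401), one simple approach treats every pixel the projection component renders for the virtual object as part of the region and takes its bounding box. The function name and the 0/1-mask representation are assumptions for illustration, not from the patent:

```python
# Hypothetical sketch: derive the projection region from a binary mask in
# which 1 marks pixels where the virtual object is drawn on the projection
# layer, returning the region as a bounding box.

def projection_region(alpha_mask):
    """alpha_mask: 2D list of 0/1 values, 1 where the virtual object is drawn.
    Returns (row_min, col_min, row_max, col_max), or None if the mask is empty."""
    rows = [r for r, row in enumerate(alpha_mask) if any(row)]
    if not rows:
        return None
    cols = [c for row in alpha_mask for c, v in enumerate(row) if v]
    return (min(rows), min(cols), max(rows), max(cols))
```

A bounding box is the coarsest usable region; a per-pixel mask (as in the "cross" example of fig. 1) would shield the object's exact silhouette instead.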
S402: A first area corresponding to the projection area is determined in the shielding layer according to the positional correspondence between the projection layer and the shielding layer.
As described above, the projection layer and the shielding layer are different layers of the display component: the projection layer displays the virtual object, while the shielding layer controls the transmittance of ambient light. To reduce the influence of ambient light on the virtual object, the transmittance behind the projection area where the virtual object is located must be reduced. Therefore, the first area corresponding to the projection area is determined in the shielding layer according to the positional correspondence between the projection layer and the shielding layer, so that the transmittance of the first area can be reduced.
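The positional correspondence between the two layers can be sketched under the assumption that it reduces to a per-axis scale and offset; a real device might instead require a full homography derived from the optical geometry. All names below are illustrative:

```python
# Hypothetical sketch of step S402: map a region on the projection layer to
# the matching first region on the shielding layer via a per-axis affine
# correspondence (scale plus offset).

def map_to_occlusion(region, scale_x, scale_y, offset_x=0, offset_y=0):
    """region: (x_min, y_min, x_max, y_max) on the projection layer.
    Returns the corresponding first region on the shielding layer."""
    x0, y0, x1, y1 = region
    return (round(x0 * scale_x + offset_x),
            round(y0 * scale_y + offset_y),
            round(x1 * scale_x + offset_x),
            round(y1 * scale_y + offset_y))
```

With identical layer resolutions and perfect alignment, the scales are 1 and the offsets 0, and the first region coincides with the projection region.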
S403: the state of the first region is switched from a light-transmitting state to a shielding state, and a second region except the first region in the shielding layer is kept in the light-transmitting state.
In the shielding layer, a region in the light-transmitting state does not block the transmission of ambient light, while a region in the shielding state does. After the first region is determined, its state is switched from the light-transmitting state to the shielding state, blocking ambient light from passing through the shielding layer and thereby reducing the transmittance of the first region. Since, in the display direction, the first region sits behind the projection region, reducing its transmittance reduces the ambient light entering the projection layer. This reduces abnormal transparency, distortion, and similar artifacts on the virtual object caused by superimposed ambient light, noticeably improves the contrast and color saturation of the virtual object, and makes the display the user observes more real. Moreover, the size and power consumption of the AR device do not increase noticeably, so its portability and user-friendliness are much better than in scheme one.
The embodiment of the present application does not specifically limit the degree of shielding. For example, the transmittance of the first region may be partially reduced, so that less ambient light enters the first region, the intensity of the ambient light there drops, and its influence on the virtual object is weakened. As another example, the transmittance of the first region may be reduced to 0, so that no ambient light enters the first region and the virtual object is unaffected by it.
The following description takes the case where the transmittance of the first region is reduced to 0. Referring to fig. 9, a schematic diagram of the display path of an AR device displaying a virtual object according to an embodiment of the present application: in the first region, the shielding layer blocks all ambient light from entering, so the virtual object in the projection region is not superimposed with ambient light from the real environment and the human eye observes only the light of the virtual object projected by the AR device. The influence of ambient light on the virtual object is thus avoided, the virtual object appears more solid, as if it were an object existing in the real environment, the virtual object and the real environment blend better, and the user's interaction experience improves.
In the shielding layer, the state of the first region is switched to the shielding state while the second region, i.e. the rest of the shielding layer, is kept in the light-transmitting state, so ambient light still enters the second region and the user can clearly observe the real environment. Compared with the aforementioned second scheme, the present application does not reduce the transmittance of the whole layer but only that of the first region, preserving to the greatest extent the transmittance of the areas where no virtual object is displayed. This maintains the interaction between the AR device and the user in the real environment and satisfies the dual requirements of a good display effect for the virtual object and a clear view of the real environment.
According to the technical scheme above, the display assembly of the AR device includes a projection layer and a shielding layer arranged to overlap in the display direction, with the shielding layer behind the projection layer. The AR device projects a virtual object on the projection layer to achieve the visual effect of the virtual object appearing in a real scene. To prevent ambient light superimposed on the virtual object from making its visual effect mismatch the real environment, the projection region of the virtual object on the projection layer is determined, a first region corresponding to the projection region is determined in the shielding layer from the position correspondence between the two layers, and when the augmented reality device projects the virtual object, the state of the first region is switched from the light-transmitting state to the shielding state while the second region, i.e. the rest of the shielding layer, remains light-transmitting. Because the shielding layer is farther from the user than the projection layer, the first region blocks ambient light from passing through the projection region during projection without affecting the normal display of the virtual object; the virtual object observed by the user therefore appears more real and is less prone to the abnormal transparency and distortion caused by superimposed ambient light from the real-scene background.
Meanwhile, the second region of the shielding layer remains light-transmitting while the virtual object is projected and does not obstruct the transmission of ambient light, so the user can still observe the real environment around the virtual object and obtain the AR visual experience of a virtual object placed in the real environment.
It should be noted that the projection module periodically projects a series of static frames of the virtual object onto the projection layer at a certain rate (e.g. 16 frames per second), so that the virtual object appears to change and move continuously; this rate is the refresh frequency of the virtual object.
The frequency at which the first region is updated on the shielding layer is made consistent with the refresh frequency of the virtual object, so that the first region corresponding to the projection region is shielded at the same time point: whenever the projection region of the virtual object changes on the projection layer, the first region changes correspondingly on the shielding layer. This dynamically controls the transmittance behind the projection region. In other words, the switching of the first region from the light-transmitting state to the shielding state is matched to the refresh frequency of the virtual object, so that when the virtual object changes dynamically, e.g. moves, rotates, scales, materializes, or becomes transparent, the shielded region of the shielding layer changes synchronously with the virtual object on the projection layer. Adjusting the first region as the projection region changes improves the display effect of the virtual object without breaking the user's sense of immersion.
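The per-frame synchronization can be sketched as a display loop that updates the shielding layer at the same tick as each projected frame. This is an illustrative sketch, not the patent's implementation; the `Frame` structure and the two callback names are assumptions.

```python
# Illustrative sketch: updating the shielding layer's first region once
# per projected frame, so the occluded area tracks the virtual object.
from dataclasses import dataclass

@dataclass
class Frame:
    image: str
    projection_region: tuple  # (x, y, w, h) on the projection layer

def run_display_loop(frames, project, set_shield_mask):
    """Project each frame and, at the same refresh tick, switch the
    matching first region to the shielding state (everything else on
    the shielding layer stays light-transmitting)."""
    for frame in frames:
        project(frame.image)                      # projection layer
        set_shield_mask(frame.projection_region)  # shielding layer, same tick

log = []
run_display_loop(
    [Frame("f0", (0, 0, 2, 2)), Frame("f1", (1, 0, 2, 2))],
    project=lambda img: log.append(("project", img)),
    set_shield_mask=lambda r: log.append(("shield", r)),
)
# log alternates project/shield entries, one pair per frame
```

The point of the loop is only the 1:1 pairing: one shielding-layer update per projection-layer refresh, which is the frequency match the paragraph above describes.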
The embodiment of the present application does not specifically limit the type of shielding layer whose transmittance can be changed locally; an electrochromic layer is taken as an example below.
An electrochromic layer uses an electrochromic material. Electrochromism is the property by which a material's optical characteristics (reflectance, transmittance, absorptance, etc.) undergo a stable and reversible change under an applied electric field, appearing as a reversible change of color and transparency.
Accordingly, the shielding layer may be an electrochromic layer: the processing component switches the state of the first region from the light-transmitting state to the shielding state by changing the electric field intensity in the first region, and keeps the second region light-transmitting by maintaining the electric field intensity in the second region. In other words, the transmittance of ambient light is controlled by changing the electric field intensity of the electrochromic layer.
Thus, when the virtual object changes dynamically in the projection region of the projection layer, the electric field intensity is adjusted correspondingly in the first region of the shielding layer while that of the second region is maintained. No complex control logic is needed: simple electric-field control suffices to reduce, or even eliminate, the ambient light passing through the first region, making the displayed virtual object appear more real. The second region remains light-transmitting during projection and does not obstruct ambient light, so the user can observe the real environment around the virtual object normally and obtain the AR visual experience of a virtual object in the real environment; the display effect is improved without harming the user's immersion.
The following description will take an electrochromic grating module as an example.
Referring to fig. 10, a schematic diagram of an electrochromic layer according to an embodiment of the present disclosure: the electro-variable grating module includes a first electrode layer 1001, a second electrode layer 1003, and a liquid crystal layer 1002. The liquid crystal layer 1002 is disposed between the first electrode layer 1001 and the second electrode layer 1003 and is filled with liquid crystals serving as the shielding material. The liquid crystals may be a dark, light-blocking color such as black; a dark color reduces the transmittance of ambient light more effectively and so minimizes its influence on the virtual object.
Referring to fig. 11, a schematic view of the operating principle of the electro-variable grating module according to an embodiment of the present application: in the natural state of the liquid crystal, i.e. when the electric field intensity between the first electrode layer 1001 and the second electrode layer 1003 is small or zero, the liquid crystals are randomly arranged in a free posture. Little or no gap exists between liquid crystals in the free posture, which blocks ambient light from passing through the liquid crystal layer.
Referring to fig. 12, another schematic view of the operating principle of the electro-variable grating module according to an embodiment of the present application: when the electric field intensity between the first electrode layer 1001 and the second electrode layer 1003 is changed, the liquid crystals align in an ordered posture perpendicular to the first electrode layer 1001 and the second electrode layer 1003, so that light can pass between them and the layer becomes light-transmitting.
Thus, to control the transmittance of the first region, the processing component may control the first electrode layer 1001, changing the voltage V_data of the first electrode layer 1001 in the first region from a first voltage to a second voltage. If the difference between the second voltage and the voltage V_com of the second electrode layer 1003 is less than a threshold V_lc, i.e. |V_data - V_com| < V_lc, then, since the distance between the first electrode layer 1001 and the second electrode layer 1003 is fixed, the electric field intensity across the liquid crystal layer 1002 falls below the field intensity required to maintain the ordered posture of the liquid crystal. The liquid crystals in the first region of the liquid crystal layer 1002 therefore relax from the ordered posture to the free posture, blocking ambient light from passing through the liquid crystal layer 1002 and switching the state of the first region from the light-transmitting state to the shielding state.
Similarly, by controlling the first electrode layer 1001, the voltage of the first electrode layer 1001 in the second region is kept at the first voltage. If the difference between the first voltage and the voltage V_com of the second electrode layer 1003 is greater than or equal to the threshold V_lc, i.e. |V_data - V_com| ≥ V_lc, then, the distance between the electrode layers being fixed, the electric field intensity across the liquid crystal layer 1002 remains at or above the field intensity required to maintain the ordered posture. The liquid crystals in the second region therefore keep the ordered posture, and the second region remains in the light-transmitting state.
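The switching rule in the two paragraphs above reduces to a single threshold comparison, sketched below. The rule follows the text; the concrete voltage values in the examples are illustrative assumptions.

```python
# Illustrative sketch of the switching rule described above: a cell is
# light-transmitting when |V_data - V_com| >= V_lc (field strong enough
# to hold the liquid crystal in the ordered posture) and shielding
# otherwise. Example voltages are assumptions, not patent values.
def cell_state(v_data: float, v_com: float, v_lc: float) -> str:
    return "transmitting" if abs(v_data - v_com) >= v_lc else "shielding"

# Second region: voltage kept at the first voltage, difference stays
# at or above the threshold, so the region stays transmitting.
assert cell_state(v_data=0.0, v_com=5.0, v_lc=3.0) == "transmitting"
# First region: voltage changed to the second voltage, difference
# drops below the threshold, so the region shields.
assert cell_state(v_data=5.0, v_com=5.0, v_lc=3.0) == "shielding"
```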
Thus, by changing the electric field intensity between the first electrode layer 1001 and the second electrode layer 1003, the liquid crystal layer 1002 can switch between the shielding state and the light-transmitting state. Controlling the transmittance of ambient light through the electric field intensity of the electro-variable grating module reduces the influence of ambient light on the virtual object in the first region without affecting its normal display to the user, and does not obstruct ambient light passing through the second region, so the user's view of the real environment outside the virtual object is unimpaired and the user obtains the AR visual experience of a virtual object in the real environment.
The first electrode layer and the second electrode layer are not particularly limited in this embodiment; for example, both may be Indium Tin Oxide (ITO) electrodes.
As a possible implementation, the processing component may switch the state of the shielding layer through switching tubes. Taking a field-effect transistor (FET) as the switching tube, see fig. 13, a schematic diagram of the operating principle of the electro-variable grating module according to an embodiment of the present application: by controlling the voltages of the gate line and the source line, the FET is turned on or off, which controls the voltage V_data of the first electrode layer 1001 and hence the electric field intensity between the first electrode layer 1001 and the second electrode layer 1003, allowing the liquid crystal layer 1002 to switch between the shielding state and the light-transmitting state.
Two drive modes are described next, with the voltage of the second electrode layer at a high level and at a low level, respectively.
Mode 1: the voltage of the second electrode layer is at a high level.
The source of the switching tube is connected to the source line, so the source carries a voltage, and the gate is connected to the gate line. The processing component controls the gates of the switching tubes of the first electrode layer 1001 in the first region, e.g. by energizing the gate line: the gate then carries a voltage, the source and drain conduct, and the voltage V_data of the first electrode layer 1001 in the first region changes from the low level to the high level, i.e. from the first voltage to the second voltage. At this point the voltage V_data of the first electrode layer 1001 and the voltage V_com of the second electrode layer 1003 are both high; if |V_data - V_com| < V_lc, the liquid crystals in the first region of the liquid crystal layer 1002 relax from the ordered posture to the free posture, switching the state of the first region from the light-transmitting state to the shielding state.
The processing component controls the gates of the switching tubes of the first electrode layer 1001 in the second region, e.g. by de-energizing the gate line: the gate then carries no voltage, the source and drain are cut off, and the voltage V_data of the first electrode layer 1001 in the second region stays at the low level, i.e. at the first voltage. At this point V_data is low and V_com is high; if |V_data - V_com| ≥ V_lc, the electric field intensity across the liquid crystal layer 1002 meets the required field intensity, the liquid crystals in the second region keep the ordered posture, and the second region remains in the light-transmitting state.
Mode 2: the voltage of the second electrode layer is at a low level.
The source of the switching tube is connected to the source line, so the source carries a voltage, and the gate is connected to the gate line. The processing component controls the gates of the switching tubes of the first electrode layer 1001 in the first region, e.g. by de-energizing the gate line: the gate carries no voltage, the source and drain are cut off, and the voltage of the first electrode layer 1001 in the first region changes from the high level to the low level, i.e. the voltage V_data changes from the first voltage to the second voltage. At this point the voltage V_data of the first electrode layer 1001 and the voltage V_com of the second electrode layer 1003 are both low; if |V_data - V_com| < V_lc, the liquid crystals in the first region of the liquid crystal layer 1002 relax from the ordered posture to the free posture, switching the state of the first region from the light-transmitting state to the shielding state.
The processing component controls the gates of the switching tubes of the first electrode layer 1001 in the second region, e.g. by energizing the gate line: the gate carries a voltage, the source and drain conduct, and the voltage of the first electrode layer in the second region is kept at the high level. At this point V_data is high and V_com is low; if |V_data - V_com| ≥ V_lc, the electric field intensity across the liquid crystal layer 1002 meets the required field intensity, the liquid crystals in the second region keep the ordered posture, and the second region remains in the light-transmitting state.
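The two drive modes can be summarized in a small truth table. This is an illustrative sketch, not the patent's circuit: the 0 V / 5 V levels and the threshold value are assumptions, and the model simply assumes the pixel electrode follows the (high) source line when the gate conducts and sits at the low level otherwise.

```python
# Illustrative sketch of the two drive modes: the FET gate connects the
# source-line voltage through to the pixel electrode, so whether "gate
# on" shields or transmits depends on the level chosen for the second
# electrode layer (V_com). Levels and threshold are assumed values.
V_HIGH, V_LOW, V_LC = 5.0, 0.0, 3.0

def pixel_state(gate_on: bool, v_com: float) -> str:
    # Gate on: V_data follows the high source line; gate off: V_data
    # stays at the low level.
    v_data = V_HIGH if gate_on else V_LOW
    return "transmitting" if abs(v_data - v_com) >= V_LC else "shielding"

# Mode 1 (V_com high): energizing a gate line shields its pixels.
assert pixel_state(gate_on=True,  v_com=V_HIGH) == "shielding"
assert pixel_state(gate_on=False, v_com=V_HIGH) == "transmitting"
# Mode 2 (V_com low): de-energizing a gate line shields its pixels.
assert pixel_state(gate_on=False, v_com=V_LOW) == "shielding"
assert pixel_state(gate_on=True,  v_com=V_LOW) == "transmitting"
```

The table makes the symmetry of the two modes explicit: the same threshold rule applies, only the resting comparison level of the second electrode layer changes.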
Compared with mode 1, where the second electrode layer is at a high level, mode 2 keeps the second electrode layer at a low level, so the AR device consumes less power. Its battery capacity, and hence its volume, can therefore be smaller, improving the portability and human-machine friendliness of the AR device.
As a possible implementation, the switching tubes provided in the first electrode layer 1001 correspond to the display pixels of the virtual object, and the second electrode layer 1003 includes electrodes facing the switching tubes. When energized, the first and second electrode layers form an electric field, so that the state of the first region is switched from the light-transmitting state to the shielding state by changing the electric field intensity of the first region, and the second region is kept in the light-transmitting state by maintaining its electric field intensity.
In this way, the electro-variable grating provides per-pixel control of the display, i.e. the transmittance of the shielding layer can be changed globally or locally, giving the AR device a better display effect. The embodiment of the present application does not specifically limit the number of switching tubes per pixel; for example, each pixel may be provided with one circuit as in the equivalent circuit diagram of fig. 13, or with several such circuits. The following takes one such circuit per pixel as an example.
Fig. 14 and 15 are equivalent circuit diagrams of the electro-variable grating module at the display-pixel level according to an embodiment of the present application. Figs. 14 and 15 include two gate lines, carrying voltages V_even and V_odd respectively. The display pixels whose switching tubes are controlled by the V_even gate line lie in the first region, and those whose switching tubes are controlled by the V_odd gate line lie in the second region.
By controlling the V_even and V_odd gate lines, the transmittance of the first and second regions is controlled. As shown in fig. 14, the V_even gate line is energized and the V_odd gate line is de-energized, so that the difference between V_even and the voltage V_com of the second electrode layer 1003 is less than the threshold V_lc; the state of the first region switches from the light-transmitting state to the shielding state while the second region stays light-transmitting. In fig. 15, both the V_even gate line and the V_odd gate line are de-energized, so that the difference between V_even and V_com is greater than or equal to the threshold V_lc; the state of the first region switches from the shielding state back to the light-transmitting state, and the second region stays light-transmitting.
It should be noted that when the shielding layer is an electro-variable grating module, an optical-waveguide architecture for the projection module of the AR device works particularly well: it makes it easier to attach the electro-variable grating module closely to the display module during manufacturing and to align the display pixels.
As a possible implementation, refer to fig. 16, a schematic diagram of an electrochromic layer according to an embodiment of the present application. The electro-variable grating module may also include a polarizer layer 1004, an upper glass layer 1005, and a lower glass layer 1006. In the display direction, from front to back, the layers are the polarizer layer 1004, the upper glass layer 1005, the first electrode layer 1001, the liquid crystal layer 1002, the second electrode layer 1003, and the lower glass layer 1006.
Next, description will be made taking as an example that the user wears AR glasses as shown in fig. 3.
The processing component of the AR glasses first extracts or separates the virtual object to be displayed and determines its projection region on the projection layer. While the projection component projects the virtual object onto the projection layer of the display component, the processing component drives the electro-variable grating module in the shielding layer so that the liquid crystals in the first region, which corresponds to the projection region, change from the ordered posture to the free posture, shielding ambient light in the first region, while the liquid crystals in the second region keep the ordered posture and ambient light passes through normally. The region shielded by the electro-variable grating module is updated to match changes in the projection region where the projection component projects the virtual object, so that movement, rotation, materialization, transparentization, and other changes of the virtual object within the visual area are tracked synchronously.
In this way, the electro-variable grating module provides dynamic shading matched to the virtual object, eliminating the display defects, color shift and a cluttered background, caused by ambient light passing through the lens and superimposing on the light of the virtual object displayed by the AR device. The user sees a more consistent image: the virtual object looks like a solid object from real life rather than the semi-transparent image often seen on ordinary AR devices, giving a stronger sense of reality. Meanwhile, the good transmittance of the areas where no virtual object is displayed lets the user complete real-world interactions normally, improving the user experience.
The aforementioned display device may be a computer device, which may be a server or a terminal device. The computer device provided in the embodiments of the present application is described below from the perspective of hardware implementation: fig. 17 is a schematic structural diagram of a server, and fig. 18 is a schematic structural diagram of a terminal device.
Referring to fig. 17, a schematic diagram of a server 1400 according to an embodiment of the present application. The server 1400 may vary considerably with configuration and performance, and may include one or more Central Processing Units (CPUs) 1422 (e.g., one or more processors), a memory 1432, and one or more storage media 1430 (e.g., one or more mass storage devices) storing applications 1442 or data 1444. The memory 1432 and the storage media 1430 may be transient or persistent storage. The program stored on a storage medium 1430 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processor 1422 may be configured to communicate with the storage medium 1430 and execute the series of instruction operations in the storage medium 1430 on the server 1400.
The server 1400 may also include one or more power supplies 1426, one or more wired or wireless network interfaces 1450, one or more input-output interfaces 1458, and/or one or more operating systems 1441, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 17.
The CPU 1422 is configured to perform the following steps:
determining a projection region of a virtual object on a projection layer, the projection layer being disposed in a display component of an augmented reality device, the display component further including an occlusion layer disposed overlapping the projection layer in a display direction, the occlusion layer being behind the projection layer in the display direction;
determining a first area of the shielding layer corresponding to the projection area according to the position corresponding relation between the projection layer and the shielding layer;
and switching the state of the first area from a light-transmitting state to a shielding state, and keeping a second area except the first area in the shielding layer in the light-transmitting state, wherein the area in the light-transmitting state in the shielding layer does not obstruct the transmission of ambient light, and the area in the shielding state in the shielding layer obstructs the transmission of the ambient light.
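The three steps above can be tied together on a small pixel grid. This is an illustrative sketch only: the grid size, the tuple representation of the first region, and the identity position correspondence between the layers are all assumptions, not part of the claimed method.

```python
# Illustrative sketch: given the first region determined from the
# projection region, build a shielding-layer mask where cells inside
# the first region are shielding and all other cells (the second
# region) stay light-transmitting.
def build_shield_mask(width, height, first_region):
    """Return a grid of states: True = shielding, False = transmitting."""
    x, y, w, h = first_region
    return [[(x <= col < x + w) and (y <= row < y + h)
             for col in range(width)] for row in range(height)]

mask = build_shield_mask(4, 3, first_region=(1, 0, 2, 2))
# rows 0-1, columns 1-2 are shielding; everything else transmitting
```

On real hardware each `True` cell would correspond to energizing (or de-energizing, depending on the drive mode) the gate line of that pixel's switching tube.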
Optionally, the CPU 1422 may further execute method steps of any specific implementation of the display method in the embodiment of the present application.
Referring to fig. 18, fig. 18 is a schematic structural diagram of a terminal device according to an embodiment of the present application. Fig. 18 is a block diagram illustrating a partial structure of a smartphone related to a terminal device provided in an embodiment of the present application, where the smartphone includes: a Radio Frequency (RF) circuit 1510, a memory 1520, an input unit 1530, a display unit 1540, a sensor 1550, an audio circuit 1560, a wireless fidelity (WiFi) module 1570, a processor 1580, and a power supply 1590. Those skilled in the art will appreciate that the smartphone configuration shown in fig. 18 is not limiting and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The following describes each component of the smartphone in detail with reference to fig. 18:
the RF circuit 1510 may be used to receive and transmit signals during information transmission and reception or during a call; in particular, it receives downlink information from a base station and passes it to the processor 1580 for processing, and transmits uplink data to the base station. In general, the RF circuit 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1510 may communicate with networks and other devices via wireless communication, which may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 1520 may be used to store software programs and modules, and the processor 1580 implements various functional applications and data processing of the smart phone by operating the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the smartphone, and the like. Further, the memory 1520 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input unit 1530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the smartphone. Specifically, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on or near the touch panel 1531 using any suitable object or accessory such as a finger or a stylus) and drive corresponding connection devices according to a preset program. Alternatively, the touch panel 1531 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 1580, and can receive and execute commands sent by the processor 1580. In addition, the touch panel 1531 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1530 may include other input devices 1532 in addition to the touch panel 1531. In particular, other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1540 may be used to display information input by the user or information provided to the user and various menus of the smartphone. The Display unit 1540 may include a Display panel 1541, and optionally, the Display panel 1541 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1531 may cover the display panel 1541, and when the touch panel 1531 detects a touch operation on or near the touch panel 1531, the touch operation is transmitted to the processor 1580 to determine the type of the touch event, and then the processor 1580 provides a corresponding visual output on the display panel 1541 according to the type of the touch event. Although in fig. 18, the touch panel 1531 and the display panel 1541 are two separate components to implement the input and output functions of the smartphone, in some embodiments, the touch panel 1531 and the display panel 1541 may be integrated to implement the input and output functions of the smartphone.
The smartphone may also include at least one sensor 1550, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which can adjust the brightness of the display panel 1541 according to the brightness of the ambient light, and a proximity sensor, which can turn off the display panel 1541 and/or the backlight when the smartphone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally along three axes) and, when stationary, the magnitude and direction of gravity; it can be used in applications that recognize the attitude of the smartphone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Other sensors that can be configured on the smartphone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described further here.
The audio circuit 1560, a speaker 1561, and a microphone 1562 may provide an audio interface between the user and the smartphone. On one hand, the audio circuit 1560 may transmit the electrical signal converted from received audio data to the speaker 1561, which converts the electrical signal into a sound signal and outputs it; on the other hand, the microphone 1562 converts collected sound signals into electrical signals, which the audio circuit 1560 receives and converts into audio data. The audio data are output to the processor 1580 for processing and then sent via the RF circuit 1510 to, for example, another smartphone, or output to the memory 1520 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1570, the smartphone can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although fig. 18 shows the WiFi module 1570, it is understood that it is not an essential part of the smartphone and may be omitted entirely as needed without changing the essence of the invention.
The processor 1580 is the control center of the smartphone. It connects the various parts of the entire smartphone using various interfaces and lines, and performs the various functions of the smartphone and processes data by running or executing the software programs and/or modules stored in the memory 1520 and calling the data stored in the memory 1520, thereby providing overall monitoring of the smartphone. Optionally, the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It is to be appreciated that the modem processor may also not be integrated into the processor 1580.
The smartphone also includes a power supply 1590 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically connected to the processor 1580 via a power management system, so that charging, discharging, and power consumption management functions are handled through the power management system.
Although not shown, the smart phone may further include a camera, a bluetooth module, and the like, which are not described herein.
In an embodiment of the application, the smartphone includes a memory 1520 that can store program code and transmit the program code to the processor.
The processor 1580 included in the smart phone may execute the display method provided in the foregoing embodiments according to an instruction in the program code.
The embodiment of the present application further provides a computer-readable storage medium for storing a computer program, where the computer program is used to execute the display method provided by the foregoing embodiment.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the display method provided in the various alternative implementations of the above aspects.
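For orientation, the display method that such program code implements — claim 1's mapping of the projection region onto the shielding layer, together with the voltage-threshold behavior of the electrochromic grating module in claims 4-6 — can be sketched as follows. This is a minimal illustrative model under assumed data structures (a per-pixel boolean mask and a dict-based position correspondence), not the patented implementation; all function names are hypothetical.

```python
TRANSMIT, SHIELD = 0, 1  # light-transmitting state vs shielding state

def projection_region(virtual_object):
    """Pixel set covered by the virtual object on the projection layer
    (here the object is reduced to an axis-aligned bounding box)."""
    x0, y0, w, h = virtual_object["bbox"]
    return {(x, y) for x in range(x0, x0 + w) for y in range(y0, y0 + h)}

def first_region(projection, correspondence):
    """Map the projection region onto the shielding layer using the position
    corresponding relation between the two layers (modeled as a dict)."""
    return {correspondence[p] for p in projection if p in correspondence}

def update_shielding_layer(mask, region1):
    """Switch the first region to the shielding state and keep the second
    region (everything else) in the light-transmitting state."""
    for pos in mask:
        mask[pos] = SHIELD if pos in region1 else TRANSMIT
    return mask

def liquid_crystal_ordered(v_first, v_second, threshold):
    """Claims 4-6: the liquid crystal keeps its ordered (light-transmitting)
    posture iff |V_first - V_second| >= threshold; when the difference falls
    below the threshold it relaxes to the free (shielding) posture."""
    return abs(v_first - v_second) >= threshold
```

A usage example with an identity correspondence: a 2x2 virtual object at the origin shields exactly the four matching occlusion pixels while the rest of the layer stays transparent, so ambient light never overlays the projected object.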
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments can be performed by hardware instructed by program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium may be at least one of the following media capable of storing program code: a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
It should be noted that the embodiments in this specification are described in a progressive manner; identical and similar parts among the embodiments may be referred to across embodiments, and each embodiment focuses on its differences from the others. In particular, the apparatus and system embodiments are described relatively simply because they are substantially similar to the method embodiments, and reference may be made to the relevant descriptions of the method embodiments. The apparatus and system embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only one specific embodiment of the present application, but the scope of protection of the present application is not limited thereto. Any change or substitution that can be easily conceived by those skilled in the art within the technical scope disclosed by the present application shall be covered by the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A display method, the method comprising:
determining a projection region of a virtual object on a projection layer, the projection layer being disposed in a display component of an augmented reality device, the display component further including an occlusion layer disposed overlapping the projection layer in a display direction, the occlusion layer being behind the projection layer in the display direction;
determining a first area of the shielding layer corresponding to the projection area according to the position corresponding relation between the projection layer and the shielding layer;
and switching the state of the first area from a light-transmitting state to a shielding state, and keeping a second area except the first area in the shielding layer in the light-transmitting state, wherein the area in the light-transmitting state in the shielding layer does not obstruct the transmission of ambient light, and the area in the shielding state in the shielding layer obstructs the transmission of the ambient light.
2. The method of claim 1, wherein a determination frequency for determining a projected area of the virtual object on the projection layer coincides with a refresh frequency of the virtual object.
3. The method according to claim 1 or 2, wherein the shielding layer is an electrochromic layer, and the switching the state of the first region from the light-transmitting state to the shielding state and the maintaining of the second region other than the first region in the shielding layer in the light-transmitting state comprises:
the state of the first region is switched from a light-transmitting state to a shielding state by changing the electric field intensity of the first region, and the second region is maintained in the light-transmitting state by maintaining the electric field intensity of the second region.
4. The method of claim 3, wherein the electrochromic layer is an electrochromic grating module comprising a first electrode layer, a second electrode layer, and a liquid crystal layer, wherein switching the state of the first region from a light-transmitting state to a shielding state by changing the electric field strength of the first region, and maintaining the second region in the light-transmitting state by maintaining the electric field strength of the second region comprises:
changing a voltage of the first electrode layer at the first region from a first voltage to a second voltage by controlling the first electrode layer;
if the difference between the second voltage and the voltage of the second electrode layer is smaller than a threshold value, the liquid crystal in the first region in the liquid crystal layer is adjusted from an ordered posture to a free posture, and the state of the first region is switched from the light-transmitting state to the shielding state;
maintaining the voltage of the first electrode layer at the second region at the first voltage by controlling the first electrode layer;
if the difference between the first voltage and the voltage of the second electrode layer is greater than or equal to the threshold value, the liquid crystal in the second region in the liquid crystal layer is kept in an ordered posture, and the second region is kept in the light-transmitting state.
5. The method according to claim 4, wherein if the voltage of the second electrode layer is at a high level, the changing the voltage of the first electrode layer in the first region from a first voltage to a second voltage by controlling the first electrode layer comprises:
by controlling a gate of a switching tube in the first electrode layer of the first region, turning on a source and a drain of the switching tube, so that the voltage of the first electrode layer in the first region is changed from a low level to a high level;
the maintaining the voltage of the first electrode layer at the second region at the first voltage by controlling the first electrode layer includes:
and turning off the source and the drain of the switching tube by controlling the gate of the switching tube in the first electrode layer of the second region, so that the voltage of the first electrode layer of the second region is kept at a low level.
6. The method according to claim 4, wherein if the voltage of the second electrode layer is at a low level, the changing the voltage of the first electrode layer in the first region from a first voltage to a second voltage by controlling the first electrode layer comprises:
turning off a source and a drain of a switching tube by controlling a gate of the switching tube in the first electrode layer in the first region, so that the voltage of the first electrode layer in the first region changes from a high level to a low level;
the maintaining the voltage of the first electrode layer at the second region at the first voltage by controlling the first electrode layer includes:
and turning on a source and a drain of a switching tube by controlling a gate of the switching tube in the first electrode layer in the second region, so that the voltage of the first electrode layer in the second region is kept at a high level.
7. The method of claim 4, wherein the electrochromic grating module comprises, in sequence from front to back in the display direction, a polarizer layer, an upper glass layer, the first electrode layer, the liquid crystal layer, the second electrode layer, and a lower glass layer.
8. The method of claim 4, wherein the liquid crystal in the liquid crystal layer is dark-colored and opaque.
9. An augmented reality device comprising a display component, a projection component and a processing component, the display component comprising a projection layer and an obscuring layer arranged to overlap in a display direction, the obscuring layer being behind the projection layer in the display direction;
the projection component is used for projecting a virtual object on the projection layer;
the processing component is used for, when the projection region of the virtual object on the projection layer is determined, determining a first region of the shielding layer corresponding to the projection region according to the position corresponding relation between the projection layer and the shielding layer; and switching the state of the first area from a light-transmitting state to a shielding state, and keeping a second area except the first area in the shielding layer in the light-transmitting state, wherein the area in the light-transmitting state in the shielding layer does not obstruct the transmission of ambient light, and the area in the shielding state in the shielding layer obstructs the transmission of the ambient light.
10. The apparatus of claim 9, wherein the shielding layer is an electrochromic grating module, the electrochromic grating module comprising a first electrode layer, a second electrode layer, and a liquid crystal layer, wherein the first electrode layer is provided with a switching tube corresponding to a display pixel, and the second electrode layer comprises an electrode opposite the switching tube.
11. The apparatus of claim 9, wherein a determination frequency for determining a projected area of the virtual object on the projection layer coincides with a refresh frequency of the virtual object.
12. The apparatus of any of claims 9-11, wherein the shielding layer is an electrochromic layer, and wherein the processing assembly is configured to:
the state of the first region is switched from a light-transmitting state to a shielding state by changing the electric field intensity of the first region, and the second region is maintained in the light-transmitting state by maintaining the electric field intensity of the second region.
13. The apparatus of claim 12, wherein the electrochromic layer is an electrochromic grating module comprising a first electrode layer, a second electrode layer, and a liquid crystal layer, the processing assembly being configured to:
changing a voltage of the first electrode layer at the first region from a first voltage to a second voltage by controlling the first electrode layer;
if the difference between the second voltage and the voltage of the second electrode layer is smaller than a threshold value, the liquid crystal in the first region in the liquid crystal layer is adjusted from an ordered posture to a free posture, and the state of the first region is switched from the light-transmitting state to the shielding state;
maintaining the voltage of the first electrode layer at the second region at the first voltage by controlling the first electrode layer;
if the difference between the first voltage and the voltage of the second electrode layer is greater than or equal to the threshold value, the liquid crystal in the second region in the liquid crystal layer is kept in an ordered posture, and the second region is kept in the light-transmitting state.
14. A computer device, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of any of claims 1-8 according to instructions in the program code.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program for performing the method of any one of claims 1-8.
CN202110377394.2A 2021-04-08 2021-04-08 Display method and related device Pending CN112950791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110377394.2A CN112950791A (en) 2021-04-08 2021-04-08 Display method and related device

Publications (1)

Publication Number Publication Date
CN112950791A true CN112950791A (en) 2021-06-11

Family

ID=76231137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110377394.2A Pending CN112950791A (en) 2021-04-08 2021-04-08 Display method and related device

Country Status (1)

Country Link
CN (1) CN112950791A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113741162A (en) * 2021-09-06 2021-12-03 联想(北京)有限公司 Image projection method and system
CN114779948A (en) * 2022-06-20 2022-07-22 广东咏声动漫股份有限公司 Method, device and equipment for controlling instant interaction of animation characters based on facial recognition
US11747628B2 (en) * 2021-12-13 2023-09-05 Toyota Jidosha Kabushiki Kaisha AR glasses

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200074724A1 (en) * 2018-08-31 2020-03-05 Magic Leap, Inc. Spatially-resolved dynamic dimming for augmented reality device
CN111399230A (en) * 2020-05-12 2020-07-10 潍坊歌尔电子有限公司 Display system and head-mounted display equipment


Similar Documents

Publication Publication Date Title
CN112950791A (en) Display method and related device
US20190279407A1 (en) System and method for augmented reality interaction
US11287905B2 (en) Trackability enhancement of a passive stylus
US11645809B2 (en) Intelligent stylus beam and assisted probabilistic input to element mapping in 2D and 3D graphical user interfaces
US9864198B2 (en) Head-mounted display
US10866820B2 (en) Transitioning between 2D and stereoscopic 3D webpage presentation
US9979946B2 (en) I/O device, I/O program, and I/O method
CN102681651B (en) A kind of user interactive system and method
US10701346B2 (en) Replacing 2D images with 3D images
WO2014128752A1 (en) Display control device, display control program, and display control method
CN105639818A (en) Intelligent safety helmet based on augmented reality, space scanning and gesture recognition technologies
US20180366091A1 (en) Hmd device and method for controlling same
US10257500B2 (en) Stereoscopic 3D webpage overlay
US10701347B2 (en) Identifying replacement 3D images for 2D images via ranking criteria
US10171800B2 (en) Input/output device, input/output program, and input/output method that provide visual recognition of object to add a sense of distance
CN109144176A (en) Display screen interactive display method, terminal and storage medium in virtual reality
CN110335200A (en) A kind of anti-method, apparatus and the relevant device of distorting of virtual reality
CN108476316A (en) A kind of 3D display method and user terminal
CN110223237A (en) Adjust the method and terminal device of image parameter
CN105915887A (en) Display method and system of stereo film source
KR20040010967A (en) Personal Digital Assistant using head mounted display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40046488

Country of ref document: HK