CN111553972A - Method, apparatus, device and storage medium for rendering augmented reality data - Google Patents


Info

Publication number: CN111553972A (application CN202010368617.4A)
Authority: CN (China)
Prior art keywords: resolution, augmented reality, rendered, rendering, reality data
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111553972B (en)
Inventors: 杨安宁, 申雪岑
Current Assignee: Beijing Baidu Netcom Science and Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by: Beijing Baidu Netcom Science and Technology Co Ltd (priority to CN202010368617.4A; the priority date is an assumption and is not a legal conclusion)
Publication of application: CN111553972A
Publication of grant: CN111553972B
Current legal status: Active; anticipated expiration recorded

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/50 - Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a method and an apparatus for rendering augmented reality data, an electronic device and a computer-readable storage medium, and relates to the technical field of data rendering. One embodiment of the method comprises: determining a camera field of view according to the field angle and the position of the camera; determining the human eye field of view according to the retina position and the pupil position of the wearer; determining the overlapped part of the human eye field of view and the camera field of view; rendering the augmented reality data to be rendered of the overlapped part at a preset first resolution; and rendering the augmented reality data to be rendered of the non-overlapped part at a preset second resolution, wherein the first resolution is greater than the second resolution. By taking the human eye field of view into account, the embodiment renders at the higher resolution only the augmented reality data to be rendered that corresponds to the overlapped part, thereby reducing the computation workload.

Description

Method, apparatus, device and storage medium for rendering augmented reality data
Technical Field
The embodiment of the application relates to the technical field of data processing, in particular to the technical field of data rendering.
Background
By wearing AR glasses, the wearer can see original image data (for example, a mountain and a pavilion on it, captured in real time by a camera on the AR glasses) superimposed with augmented reality data (for example, the name of the pavilion), which greatly improves the capability and convenience of information acquisition.
In the prior art, when augmented reality data is to be rendered, the portion that needs to be rendered at a higher resolution is often determined directly and simply from the camera field of view, and all augmented reality data to be rendered within that field of view is then rendered at the higher resolution.
Disclosure of Invention
The embodiment of the application provides a method and a device for rendering augmented reality data, electronic equipment and a computer-readable storage medium.
In a first aspect, an embodiment of the present application provides a method for rendering augmented reality data, comprising: determining a camera field of view according to the field angle and the position of the camera; determining the human eye field of view according to the retina position and the pupil position of the wearer; determining the overlapped part of the human eye field of view and the camera field of view; rendering the augmented reality data to be rendered of the overlapped part at a preset first resolution; and rendering the augmented reality data to be rendered of the non-overlapped part at a preset second resolution, wherein the first resolution is greater than the second resolution.
In a second aspect, an embodiment of the present application provides an apparatus for rendering augmented reality data, comprising: a camera field of view determining unit configured to determine a camera field of view according to the field angle and the position of the camera; a human eye field of view determining unit configured to determine a human eye field of view according to the retina position and the pupil position of a wearer; an overlapped part determining unit configured to determine the overlapped part of the human eye field of view and the camera field of view; a first resolution rendering unit configured to render the augmented reality data to be rendered of the overlapped part at a preset first resolution; and a second resolution rendering unit configured to render the augmented reality data to be rendered of the non-overlapped part at a preset second resolution, wherein the first resolution is greater than the second resolution.
In a third aspect, an embodiment of the present application provides an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions, when executed, causing the at least one processor to perform the method for rendering augmented reality data described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions which, when executed, enable a computer to implement the method for rendering augmented reality data described in any implementation of the first aspect.
According to the method, the apparatus, the electronic device and the computer-readable storage medium for rendering augmented reality data provided by the embodiments of the application, the camera field of view is first determined according to the field angle and the position of the camera, and the human eye field of view is determined according to the retina position and the pupil position of the wearer; the overlapped part of the human eye field of view and the camera field of view is then determined; finally, the augmented reality data to be rendered of the overlapped part is rendered at a preset first resolution, and that of the non-overlapped part at a preset second resolution, the first resolution being greater than the second resolution. Unlike the prior-art scheme that directly renders all augmented reality data to be rendered in the camera field of view at a higher resolution, the present application takes full account of the fact that the human eye field of view usually covers only a part of the camera field of view. By determining the human eye field of view and intersecting it with the camera field of view, the smallest possible range of augmented reality data is rendered at the higher resolution, which lowers the computing-power requirement on low-computing-power devices such as AR glasses, reduces the frequency of stuttering, and improves the wearer's experience.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for rendering augmented reality data according to the present application;
FIG. 3 is a flow diagram of another embodiment of a method for rendering augmented reality data according to the present application;
FIG. 4 is a schematic view of a field of view in a particular application scenario of a method for rendering augmented reality data according to the present application;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for rendering augmented reality data according to the present application;
fig. 6 is a block diagram of an electronic device for rendering augmented reality data suitable for use to implement embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the methods, apparatuses, electronic devices and computer-readable storage media for rendering augmented reality data of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include a camera 101, a human eye parameter sensor 102, a network 103, and AR glasses 104. The network 103 is the medium that provides the communication link between the camera 101, the eye parameter sensors 102 and the AR glasses 104. Network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user can use the AR glasses 104 to perform information interaction with the camera 101 and the eye parameter sensor 102 through the network 103, that is, the AR glasses 104 transmit or receive data or information transmitted from the camera 101 and the eye parameter sensor 102 to or from the camera 101 and the eye parameter sensor 102 through the network 103. To implement different functions, various applications may be installed on the AR glasses 104, such as an augmented reality type application, a communication type application, a map type application, and so on.
The camera 101 is used to capture original image data and provide camera parameters; the human eye parameter sensor 102 collects the wearer's human eye parameters; the AR glasses 104 determine the camera field of view from the camera parameters and the human eye field of view from the human eye parameters, then determine the overlapped region of the two, so that the augmented reality data to be rendered in the overlapped and non-overlapped regions are rendered at different resolutions.
It should be noted that the camera 101 and the eye parameter sensor 102 may exist separately from the components of the AR glasses 104, and the original image data, the camera parameters, and the eye parameters are transmitted to the AR glasses 104 through the network 103; the camera 101 and the eye parameter sensor 102 may also be integrated into the AR glasses 104, that is, built-in components that together form the AR glasses 104, and at this time, the original image data, the camera parameters, and the eye parameters may be directly transmitted to the subsequent operation components of the AR glasses 104 through the local data transmission mechanism without the network 103.
It should be further noted that the augmented reality data to be rendered may be stored locally in the AR glasses 104 in advance, or the AR glasses 104 may send an acquisition request to a data storage server storing a large amount of augmented reality data to be rendered, so as to acquire the required augmented reality data returned by the data storage server through the network 103. Meanwhile, in some cases, the rendering of the augmented reality data to be rendered may also be delegated by the AR glasses 104 to an electronic device with stronger computing power, with the AR glasses directly receiving all or part of the augmented reality data rendered by that device.
It should be noted that the method for rendering augmented reality data provided in the following embodiments of the present application is generally performed by the AR glasses 104, and accordingly, the apparatus for rendering augmented reality data is generally disposed in the AR glasses 104.
It should be understood that the number of cameras 101, eye parameter sensors 102, and AR glasses 104 in fig. 1 is merely illustrative; there may be any number of each, as desired for an implementation. Moreover, the correspondence among them need not be one-to-one and may also be one-to-many.
With continued reference to fig. 2, an implementation flow 200 for rendering augmented reality data according to one embodiment of the present application is shown. The method comprises the following steps:
Step 201, determining the camera field of view according to the field angle and the position of the camera.
In this embodiment, the executing body of the method for rendering augmented reality data (e.g., the AR glasses 104 shown in fig. 1) may acquire the field angle and the position of the camera from a local or non-local component (e.g., the camera 101 shown in fig. 1). When the camera is a local component integrated in the executing body, the camera parameters, including its field angle and position, need only be obtained by a local read; when it is a non-local component existing separately from the executing body, the camera parameters can be acquired by the executing body sending it an acquisition command or receiving data it returns periodically.
This step is intended to have the executing body determine the camera field of view from the field angle and the position of the camera. The field angle, also called the field of view in optical engineering, determines the visual range of an optical instrument; therefore, with the camera position as the base positioning data, the camera field of view can be calculated from the camera's field angle.
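To make this step concrete, here is a minimal 2-D sketch (the Sector model, its fields and the sample values are illustrative assumptions, not part of the patent): the camera position anchors an angular sector whose opening angle is the field angle, and a point lies in the camera field of view when its bearing deviates from the heading by at most half that angle.

```python
import math
from dataclasses import dataclass

@dataclass
class Sector:
    """A 2-D field of view modeled as an angular sector anchored at a point."""
    x: float        # position of the aperture (camera, or pupil of the eye)
    y: float
    heading: float  # direction the aperture faces, in radians
    fov: float      # full field angle, in radians

    def contains(self, px: float, py: float) -> bool:
        """True if the point (px, py) lies inside this field of view."""
        bearing = math.atan2(py - self.y, px - self.x)
        # smallest signed angle between the point's bearing and the heading
        diff = (bearing - self.heading + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= self.fov / 2

# a camera at the origin facing along +x with a 90-degree field angle
camera_view = Sector(x=0.0, y=0.0, heading=0.0, fov=math.radians(90))
print(camera_view.contains(5.0, 2.0))   # True  (about 22 degrees off-axis)
print(camera_view.contains(1.0, 5.0))   # False (about 79 degrees off-axis)
```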
Step 202, determining the human eye field of view according to the retina position and the pupil position of the wearer.
In this embodiment, the executing body (e.g., the AR glasses 104 shown in fig. 1) may acquire the retina position and the pupil position of the wearer from a local or non-local component (e.g., the human eye parameter sensor 102 shown in fig. 1). When the sensor is a local component integrated in the executing body, the human eye parameters, including the retina position and the pupil position of the wearer, need only be obtained by a local read, for example directly from a human eye parameter sensor built into the side of the executing body facing the wearer; when it is a non-local component existing separately from the executing body, the human eye parameters can be acquired by the executing body sending it an acquisition command or receiving data returned to it periodically, for example from another ocular device (other than the AR glasses) worn by the wearer at the same time.
This step is intended to have the executing body determine the human eye field of view from the retina position and the pupil position of the wearer. One way of determining the human eye field of view, given by way of example and not limitation, is for the executing body to perform the following steps:
taking the upper and lower edges of the retina of the wearer as first and second starting points, respectively;
taking the pupil position of the wearer as an end point;
connecting the first starting point and the second starting point with the end point respectively, and extending the connecting lines to obtain the human eye viewing cone;
and determining the visual field of the real scene in the coverage range of the human eye cones as the visual field of the human eyes.
In this specific implementation, the retina wraps around the roughly spherical vitreous body, and the pupil can be regarded as one point on that body. Taking the upper and lower edges of the retina as the two starting points and the pupil position as the end point, connecting each starting point to the end point and extending the lines outward yields a fan-shaped human eye viewing cone based on the pinhole imaging principle; the real-scene field of view within the coverage of this viewing cone is then determined as the human eye field of view. The first and second starting points are derived from the retina so that the range enclosed by the viewing cone coincides with the actual field of vision as closely as possible. In special cases, for example if part of a wearer's retina is damaged, the starting-point positions should be adjusted accordingly so that the resulting viewing cone still matches the actual field of vision.
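Continuing the same illustrative 2-D sketch (the helper below reuses the Sector class defined earlier and is an assumption, not the patent's implementation), the human eye viewing cone follows from the two retinal starting points and the pupil end point, with the two boundary rays extended outward through the pupil as the pinhole principle suggests:

```python
def eye_cone_from_anatomy(retina_top, retina_bottom, pupil):
    """Derive the human eye viewing cone as a Sector: connect each retinal
    starting point to the pupil end point and extend the lines outward."""
    # directions of the two boundary rays, pointing out through the pupil
    d1 = math.atan2(pupil[1] - retina_top[1], pupil[0] - retina_top[0])
    d2 = math.atan2(pupil[1] - retina_bottom[1], pupil[0] - retina_bottom[0])
    # opening angle between the rays, and their mean direction
    spread = abs((d1 - d2 + math.pi) % (2 * math.pi) - math.pi)
    heading = math.atan2(math.sin(d1) + math.sin(d2),
                         math.cos(d1) + math.cos(d2))
    return Sector(x=pupil[0], y=pupil[1], heading=heading, fov=spread)

# retinal edges sit behind the pupil, so the cone opens forward (+x)
eye_view = eye_cone_from_anatomy(retina_top=(-1.0, 0.5),
                                 retina_bottom=(-1.0, -0.5),
                                 pupil=(0.0, 0.0))
print(math.degrees(eye_view.fov))  # ~53.1 degrees, a plausible opening angle
```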
Step 203, determining the overlapped part of the human eye field of view and the camera field of view.
On the basis of step 201 and step 202, this step is intended to have the executing body determine the overlapped part of the human eye field of view and the camera field of view; determining the overlapped part simultaneously determines the non-overlapped part.
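Under the same sketch, routing each item of augmented reality data to the correct rendering path reduces to testing its anchor point against both fields of view (an illustrative simplification; an actual implementation might intersect areas or view frusta rather than test points):

```python
def split_by_overlap(items, eye_view, camera_view):
    """Partition AR items (dicts with a 2-D anchor point under key 'pos')
    into the overlapped part (inside BOTH views) and the non-overlapped part."""
    overlapped, non_overlapped = [], []
    for item in items:
        px, py = item["pos"]
        in_both = eye_view.contains(px, py) and camera_view.contains(px, py)
        (overlapped if in_both else non_overlapped).append(item)
    return overlapped, non_overlapped

items = [{"name": "A", "pos": (4.0, 0.5)},   # inside both views
         {"name": "F", "pos": (1.0, 5.0)}]   # outside the overlap
hi_items, lo_items = split_by_overlap(items, eye_view, camera_view)
print([i["name"] for i in hi_items], [i["name"] for i in lo_items])  # ['A'] ['F']
```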
Step 204, rendering the augmented reality data to be rendered of the overlapped part at a preset first resolution.
On the basis of step 203, this step is intended to have the executing body render the augmented reality data to be rendered of the overlapped part at a preset first resolution. Since the overlapped part is the intersection of the human eye field of view and the camera field of view, it is the partial area the wearer is actually gazing at; to present a better visual effect to the wearer, this step therefore renders it at the higher first resolution. It should be understood that the first resolution should at least allow the wearer to see the content of the augmented reality data clearly; taking picture content as an example, the preset first resolution is preferably 720P or above. Although 360P is higher than 180P, if 360P does not provide a good visual effect it should not fall within the range of the first resolution described in this step.
Furthermore, although the augmented reality data is ultimately presented to the wearer as a visual image after rendering, the resolution required for the wearer to see the content clearly also depends on the data type of the augmented reality data. Taking the three types of text, picture and video as an example, the first-resolution rendering effect described in this step can also be achieved by using a different resolution for each type during rendering, instead of rendering all types of data at one and the same first resolution, thereby reducing the amount of computation required for rendering as much as possible.
Step 205, rendering the augmented reality data to be rendered of the non-overlapped part at a preset second resolution.
On the basis of step 203, this step is intended to have the executing body render the augmented reality data to be rendered of the non-overlapped part at a preset second resolution. Since the non-overlapped part is the portion of the human eye field of view and the camera field of view outside their intersection, it is an area the wearer is not gazing at; to reduce as much as possible the computing power required for rendering, this step renders it at a second resolution lower than the first resolution. It should be understood that the second resolution should at least keep the content of the augmented reality data discernible to the wearer; taking picture content as an example, the preset second resolution is preferably 480P or above. Although 180P is also low relative to 360P, content at 180P is hardly visible to the wearer and should not fall within the range of the second resolution described in this step.
Corresponding to the description of the data types affecting the resolution mentioned in step 204, different data types may also correspond to different second resolutions, instead of rendering all types of data at the same second resolution, so as to reduce the amount of computation required for rendering as much as possible.
Further, in some other embodiments of the present application, a first resolution canvas may be determined according to the preset first resolution and a second resolution canvas according to the preset second resolution, so that the augmented reality data to be rendered of the overlapped part is rendered on the first resolution canvas at the first resolution and that of the non-overlapped part on the second resolution canvas at the second resolution; the content finally presented to the wearer, fusing real-scene data and augmented reality data, is then obtained by superimposing the canvases. Compared with other ways of obtaining the fused content, the canvas-superposition approach keeps each canvas independent, so there is no need to worry about the parts interfering with one another; the effect is better and targeted modification is easier.
The superposition between the canvases can be realized by the executing body as follows:
acquiring original image data which is obtained by shooting through a camera and corresponds to the field of view of the camera;
drawing original image data on an original image canvas corresponding to the camera view;
and stretching the rendered first resolution canvas and the rendered second resolution canvas and then overlapping the stretched first resolution canvas and the stretched second resolution canvas on the original image canvas.
It should be noted that, if the scheme in which the first and second resolutions are determined by data type is adopted in steps 204 and 205, several different first resolution canvases and second resolution canvases may be formed according to the different resolutions, and all of them are superimposed at the end.
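The superposition itself might look like the following sketch, which uses Pillow purely for illustration; the patent does not prescribe an imaging library, and the RGBA canvases whose unrendered pixels are fully transparent are assumptions:

```python
from PIL import Image

def compose_frame(original, lo_canvas, hi_canvas):
    """Stretch the rendered canvases to the original image size and
    superimpose them: original at the bottom, low-resolution canvas next,
    high-resolution canvas on top."""
    frame = original.convert("RGBA")
    for canvas in (lo_canvas, hi_canvas):
        layer = canvas.convert("RGBA").resize(frame.size, Image.BILINEAR)
        frame = Image.alpha_composite(frame, layer)
    return frame

# a 1280x720 camera frame, a 480P-scale low-resolution canvas and a
# 720P-scale high-resolution canvas (unrendered pixels fully transparent)
original = Image.new("RGBA", (1280, 720), (40, 40, 40, 255))
lo = Image.new("RGBA", (854, 480), (0, 0, 0, 0))
hi = Image.new("RGBA", (1280, 720), (0, 0, 0, 0))
compose_frame(original, lo, hi).save("composed.png")
```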
According to the method for rendering augmented reality data provided by this embodiment, the camera field of view is determined according to the field angle and the position of the camera, and the human eye field of view according to the retina position and the pupil position of the wearer; the overlapped part of the two is then determined; finally, the augmented reality data to be rendered of the overlapped part is rendered at a preset first resolution and that of the non-overlapped part at a preset second resolution. Unlike the prior-art scheme that directly renders all augmented reality data to be rendered in the camera field of view at a higher resolution, this embodiment takes full account of the fact that the human eye field of view covers only a part of the camera field of view; by determining the human eye field of view and intersecting it with the camera field of view, the smallest possible range of augmented reality data is rendered at the higher resolution, which lowers the computing-power requirement on low-computing-power devices such as AR glasses, reduces the frequency of stuttering, and improves the wearer's experience.
On the basis of the above embodiments, the present application further provides, through fig. 3, an implementation flow 300 of another embodiment for rendering augmented reality data. Unlike the implementation flow 200 shown in fig. 2, the implementation flow 300 selects a different resolution for each type of augmented reality data to be rendered when performing the first-resolution and second-resolution rendering, so as to further reduce the computation required for rendering. The method comprises the following steps:
step 301, determining a camera field of view according to the field angle and the position of the camera.
Step 302, determining the human eye field of view according to the retina position and the pupil position of the wearer.
Step 303, determining the overlapped part of the human eye field of view and the camera field of view.
The above steps 301-303 are the same as steps 201-203 shown in fig. 2; for the identical content, refer to the corresponding parts of the previous embodiment, which are not repeated here.
Step 304, acquiring the data types contained in the augmented reality data to be rendered of the overlapped part.
Step 305, determining a target first resolution for each type of augmented reality data to be rendered according to the first correspondence table.
The first correspondence table records in advance the correspondence between each data type and each first resolution (i.e., each higher resolution), for example text-480P, picture-720P, video-720P. It should be understood that 480P and 720P are used here only as one way of describing resolution; other notations for resolution may equally be used.
Step 306, rendering each type of augmented reality data to be rendered of the overlapped part at the corresponding target first resolution.
In steps 304-306, by obtaining the data types contained in the augmented reality data to be rendered of the overlapped part and applying the correspondence recorded in the first correspondence table, the overlapped part is rendered at different first resolutions according to data type. Compared with a scheme that renders everything at 720P, text is rendered at only 480P, which saves part of the computation required for rendering.
Step 307, acquiring the data types contained in the augmented reality data to be rendered of the non-overlapped part.
Step 308, determining a target second resolution for each type of augmented reality data to be rendered according to the second correspondence table.
The second correspondence table records in advance the correspondence between each data type and each second resolution (i.e., each lower resolution), for example text-360P, picture-480P, video-480P.
Step 309, rendering each type of augmented reality data to be rendered of the non-overlapped part at the corresponding target second resolution.
In steps 307-309, by obtaining the data types contained in the augmented reality data to be rendered of the non-overlapped part and applying the correspondence recorded in the second correspondence table, the non-overlapped part is rendered at different second resolutions according to data type. Compared with a scheme that renders everything at 480P, text is rendered at only 360P, which saves part of the computation required for rendering.
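The table-driven selection of steps 304-309 can be sketched as a pair of lookup tables (the concrete entries mirror the examples in the description but remain assumptions):

```python
# first correspondence table: data type -> first (higher) resolution
FIRST_TABLE = {"text": "480P", "picture": "720P", "video": "720P"}
# second correspondence table: data type -> second (lower) resolution
SECOND_TABLE = {"text": "360P", "picture": "480P", "video": "480P"}

def target_resolution(data_type: str, in_overlap: bool) -> str:
    """Look up the resolution at which an AR item should be rendered."""
    table = FIRST_TABLE if in_overlap else SECOND_TABLE
    return table[data_type]

for data_type, in_overlap in [("text", True), ("text", False), ("video", False)]:
    print(data_type, in_overlap, "->", target_resolution(data_type, in_overlap))
# text True -> 480P   (overlapped text needs only 480P, not 720P)
# text False -> 360P
# video False -> 480P
```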
For a deeper understanding, the present application also provides a specific implementation scheme in combination with a specific application scenario, please refer to the view diagram shown in fig. 4.
In this application scene, an original-image camera is built into the side of the AR glasses facing the same direction as the wearer's field of view and is used to capture original image data; its field of view is represented in fig. 4 by a square filled entirely with vertical lines. A human eye parameter sensor is built into the side of the AR glasses facing the wearer and collects human eye parameters including the retina position and the pupil position, from which the AR glasses determine the human eye field of view; that field of view is represented in fig. 4 by an oval filled entirely with horizontal lines.
As shown in fig. 4, augmented reality data to be rendered A, B, C, F, G, H and I lie in the square camera field of view, and augmented reality data to be rendered A, B, C, D and E lie in the oval human eye field of view. Where the oval and the square overlap, the vertical and horizontal lines form a grid; that is, the overlapped part is the area covered by the grid lines.
Therefore, the augmented reality data to be rendered corresponding to the overlapped part is only A, B and C. Since A is pure text content, B pure picture content and C pure video content, according to the preset first correspondence table it is determined that A is rendered at 480P while B and C are rendered at 720P: A is drawn on the text high-resolution canvas corresponding to 480P, and B and C on the picture-and-video high-resolution canvas corresponding to 720P. Similarly, the lower resolutions are then selected according to the data types of D, E, F, G, H and I in combination with the second correspondence table, yielding a corresponding number of low-resolution canvases. Finally, all canvases are stretched and superimposed on the original image canvas that draws the original image data corresponding to the camera field of view, giving the content ultimately presented to the wearer.
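Tying the earlier sketches together for this scenario (the positions below, and the reuse of split_by_overlap, eye_view, camera_view and target_resolution from the sketches above, are all hypothetical):

```python
scene = [{"name": "A", "type": "text",    "pos": (4.0,  0.3)},
         {"name": "B", "type": "picture", "pos": (4.0, -0.3)},
         {"name": "C", "type": "video",   "pos": (3.0,  0.0)},
         {"name": "F", "type": "text",    "pos": (1.0,  5.0)}]  # camera-only
hi_items, lo_items = split_by_overlap(scene, eye_view, camera_view)
for item in hi_items:
    print(item["name"], target_resolution(item["type"], in_overlap=True))
# A 480P, B 720P, C 720P: matching the canvases described above
```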
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for rendering augmented reality data, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices including AR glasses.
As shown in fig. 5, the apparatus 500 for rendering augmented reality data of the present embodiment may include: a camera view determination unit 501, a human eye view determination unit 502, an overlap determination unit 503, a first resolution rendering unit 504, and a second resolution rendering unit 505. Wherein the camera view determining unit 501 is configured to determine a camera view according to a field angle and a position of the camera; a human eye visual field determination unit 502 configured to determine a human eye visual field from a retina position and a pupil position of the wearer; an overlap portion determining unit 503 configured to determine an overlap portion of the human eye view and the camera view; a first resolution rendering unit 504 configured to render the overlapped portion of the augmented reality data to be rendered at a preset first resolution; and a second resolution rendering unit 505 configured to render the non-overlapped part of the augmented reality data to be rendered at a preset second resolution.
In the apparatus 500 for rendering augmented reality data of the present embodiment, the detailed processing and technical effects of the camera view determining unit 501, the human eye view determining unit 502, the overlapped part determining unit 503, the first resolution rendering unit 504 and the second resolution rendering unit 505 can be found in the related descriptions of steps 201-205 in the embodiment corresponding to fig. 2, and are not repeated here.
In some optional implementations of the present embodiment, the human eye vision field determination unit 502 may be further configured to:
taking the upper and lower edges of the retina of the wearer as first and second starting points, respectively;
taking the pupil position of the wearer as an end point;
connecting the first starting point and the second starting point with the end point respectively, and extending the connecting lines to obtain the human eye viewing cone;
and determining the visual field of the real scene in the coverage range of the human eye cones as the visual field of the human eyes.
In some optional implementations of this embodiment, the apparatus may further include:
a human eye parameter acquisition unit configured to acquire a retina position and a pupil position of the wearer through a human eye parameter sensor before determining a human eye visual field according to the retina and pupil position of the wearer; wherein the eye parameter sensor is built in the side of the AR glasses facing the wearer.
In some optional implementations of this embodiment, the first resolution rendering unit 504 may be further configured to:
acquiring a data type contained in the augmented reality data to be rendered of the overlapped part;
determining a target first resolution corresponding to each type of augmented reality data to be rendered according to the corresponding table; wherein, the corresponding relation between each data type and each first resolution is pre-recorded in the corresponding table;
rendering each type of augmented reality data to be rendered of the coincidence portion at a corresponding target first resolution.
In some optional implementations of this embodiment, the apparatus may further include:
a first resolution canvas determination unit configured to determine a first resolution canvas according to a first resolution;
a second resolution canvas determination unit configured to determine a second resolution canvas according to a second resolution;
correspondingly, the first resolution rendering unit 504 may be further configured to:
rendering the overlapped part of the augmented reality data to be rendered on a first resolution canvas according to a first resolution;
correspondingly, the second resolution rendering unit 505 may be further configured to:
and rendering the non-overlapped part of the augmented reality data to be rendered on the second resolution canvas according to the second resolution.
In some optional implementations of this embodiment, the apparatus may further include:
an original image data acquisition unit configured to acquire original image data corresponding to a camera view field captured by a camera;
an original image data rendering unit configured to render original image data on an original image canvas corresponding to a camera view;
and the superposition processing unit is configured to superpose the rendered first resolution canvas and the rendered second resolution canvas on the original image canvas after being stretched.
This apparatus embodiment corresponds to the method embodiment above. Unlike the prior-art scheme that directly renders all augmented reality data to be rendered in the camera field of view at a higher resolution, the apparatus for rendering augmented reality data provided by this embodiment likewise takes full account of the fact that the human eye field of view is usually focused on only a part of the camera field of view; by determining the human eye field of view and intersecting it with the camera field of view, the smallest possible range of augmented reality data is rendered at the higher resolution, which lowers the computing-power requirement on low-computing-power devices such as AR glasses, reduces the frequency of stuttering, and improves the wearer's experience.
According to an embodiment of the present application, an electronic device and a computer-readable storage medium are also provided.
As shown in fig. 6, a block diagram of an electronic device for rendering augmented reality data according to an embodiment of the application is shown. Electronic devices are intended to represent various forms of digital computers, such as AR glasses, laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the electronic device includes: one or more processors 601, a memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other ways as required. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as required. Likewise, multiple electronic devices may be connected, each providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). One processor 601 is taken as an example in fig. 6.
The memory 602 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method for rendering augmented reality data provided herein. A non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform a method for rendering augmented reality data provided herein.
The memory 602, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for rendering augmented reality data in the embodiments of the present application (for example, the camera view determining unit 501, the human eye view determining unit 502, the coinciding portion determining unit 503, the first resolution rendering unit 504, and the second resolution rendering unit 505 shown in fig. 5). The processor 601 executes various functional applications of the server and data processing by executing non-transitory software programs, instructions and modules stored in the memory 602, that is, implements the method for rendering augmented reality data in the above method embodiment.
The memory 602 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created during use of the electronic device for rendering augmented reality data, and the like. Further, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 optionally includes memories remote from the processor 601, which may be connected over a network to the electronic device for rendering augmented reality data. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for rendering augmented reality data may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of an electronic apparatus for rendering augmented reality data, such as an input device like a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointer stick, one or more mouse buttons, a track ball, a joystick, etc. The output devices 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Unlike the prior-art scheme that directly renders all augmented reality data to be rendered in the camera field of view at a higher resolution, the technical solution of the embodiments of the application takes full account of the fact that the human eye field of view is usually focused on only a part of the camera field of view; by determining the human eye field of view and intersecting it with the camera field of view, the smallest possible range of augmented reality data is rendered at the higher resolution, which lowers the computing-power requirement on low-computing-power devices such as AR glasses, reduces the frequency of stuttering, and improves the wearer's experience.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. A method for rendering augmented reality data, comprising:
determining a camera view according to the field angle and the position of the camera;
determining the human eye visual field according to the retina position and the pupil position of the wearer;
determining a coincident portion of the human eye field of view and the camera field of view;
rendering the augmented reality data to be rendered of the overlapped part according to a preset first resolution;
rendering the augmented reality data to be rendered of the non-overlapped part at a preset second resolution; wherein the first resolution is greater than the second resolution.
2. The method of claim 1, wherein determining the human eye field of view from the wearer's retina and pupil positions comprises:
taking the upper and lower edges of the wearer's retina as first and second starting points, respectively;
taking the pupil position of the wearer as an endpoint;
connecting the first starting point and the second starting point with the end point respectively, and extending the connecting line to obtain a human eye viewing cone;
and determining the visual field of the real scene in the coverage range of the human eye cones as the visual field of the human eyes.
3. The method of claim 1, wherein prior to determining the human eye's visual field from the wearer's retina and pupil location, further comprising:
acquiring the retina position and the pupil position of the wearer through a human eye parameter sensor; wherein the eye parameter sensor is built-in to the AR glasses on a side facing the wearer.
4. The method according to claim 1, wherein rendering the augmented reality data to be rendered of the overlapped part at a preset first resolution comprises:
acquiring a data type contained in the augmented reality data to be rendered of the overlapped part;
determining a target first resolution corresponding to each type of augmented reality data to be rendered according to the corresponding table; wherein, the corresponding relation between each data type and each first resolution is pre-recorded in the corresponding table;
and rendering each type of augmented reality data to be rendered of the overlapped part according to the corresponding target first resolution.
5. The method of any of claims 1 to 4, further comprising:
determining a first resolution canvas according to the first resolution;
determining a second resolution canvas according to the second resolution;
correspondingly, rendering the augmented reality data to be rendered of the overlapped part at the preset first resolution comprises:
rendering the augmented reality data to be rendered of the overlapped part on the first resolution canvas at the first resolution;
correspondingly, rendering the augmented reality data to be rendered of the non-overlapped part at the preset second resolution comprises:
rendering the augmented reality data to be rendered of the non-overlapped part on the second resolution canvas at the second resolution.
6. The method of claim 5, further comprising:
acquiring original image data which is obtained by shooting through the camera and corresponds to the camera view;
drawing the original image data on an original image canvas corresponding to the camera view;
and stretching the rendered first resolution canvas and the rendered second resolution canvas and then overlapping the stretched first resolution canvas and the stretched second resolution canvas on the original image canvas.
7. An apparatus for rendering augmented reality data, comprising:
a camera field of view determination unit configured to determine a camera field of view from a field angle and a position of the camera;
a human eye visual field determining unit configured to determine a human eye visual field from a retina position and a pupil position of a wearer;
an overlapping portion determining unit configured to determine an overlapping portion of the human eye field of view and the camera field of view;
a first resolution rendering unit configured to render the augmented reality data to be rendered of the overlapped portion at a preset first resolution;
a second resolution rendering unit configured to render the augmented reality data to be rendered of the non-overlapped part at a preset second resolution; wherein the first resolution is greater than the second resolution.
8. The apparatus of claim 7, wherein the human eye field of view determination unit is further configured to:
taking the upper and lower edges of the wearer's retina as first and second starting points, respectively;
taking the pupil position of the wearer as an endpoint;
connecting the first starting point and the second starting point with the end point respectively, and extending the connecting line to obtain a human eye viewing cone;
and determining the visual field of the real scene in the coverage range of the human eye cones as the visual field of the human eyes.
9. The apparatus of claim 7, further comprising:
a human eye parameter acquisition unit configured to acquire a retinal position and a pupil position of a wearer by a human eye parameter sensor before determining a human eye visual field from the retinal and pupil positions of the wearer; wherein the eye parameter sensor is built-in to the AR glasses on a side facing the wearer.
10. The apparatus of claim 7, wherein the first resolution rendering unit is further configured to:
acquiring a data type contained in the augmented reality data to be rendered of the overlapped part;
determining a target first resolution corresponding to each type of augmented reality data to be rendered according to the corresponding table; wherein, the corresponding relation between each data type and each first resolution is pre-recorded in the corresponding table;
and rendering each type of augmented reality data to be rendered of the overlapped part according to the corresponding target first resolution.
11. The apparatus of any of claims 7 to 10, further comprising:
a first resolution canvas determination unit configured to determine a first resolution canvas according to the first resolution;
a second resolution canvas determination unit configured to determine a second resolution canvas according to the second resolution;
correspondingly, the first resolution rendering unit is further configured to:
rendering the overlapped part of augmented reality data to be rendered on the first resolution canvas according to the first resolution;
correspondingly, the second resolution rendering unit is further configured to:
and render the augmented reality data to be rendered of the non-overlapped part on the second resolution canvas at the second resolution.
12. The apparatus of claim 11, further comprising:
an original image data acquisition unit configured to acquire original image data corresponding to the camera view field captured by the camera;
an original image data rendering unit configured to render the original image data on an original image canvas corresponding to the camera view;
and the superposition processing unit is configured to superpose the rendered first resolution canvas and the rendered second resolution canvas on the original image canvas after being stretched.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for rendering augmented reality data of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method for rendering augmented reality data of any one of claims 1-6.
CN202010368617.4A (filed 2020-04-27; priority date 2020-04-27): Method, apparatus, device and storage medium for rendering augmented reality data. Status: Active; granted as CN111553972B.

Priority Applications (1)

Application: CN202010368617.4A (granted as CN111553972B); Title: Method, apparatus, device and storage medium for rendering augmented reality data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010368617.4A CN111553972B (en) 2020-04-27 2020-04-27 Method, apparatus, device and storage medium for rendering augmented reality data

Publications (2)

Publication Number Publication Date
CN111553972A true CN111553972A (en) 2020-08-18
CN111553972B CN111553972B (en) 2023-06-30

Family

ID=72007895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010368617.4A Active CN111553972B (en) 2020-04-27 2020-04-27 Method, apparatus, device and storage medium for rendering augmented reality data

Country Status (1)

Country Link
CN (1) CN111553972B (en)

Patent Citations (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101470729A (en) * 2007-12-25 2009-07-01 百度在线网络技术(北京)有限公司 Web page picture displaying method and system, and server
CN102591016A (en) * 2010-12-17 2012-07-18 微软公司 Optimized focal area for augmented reality displays
US20150040074A1 (en) * 2011-08-18 2015-02-05 Layar B.V. Methods and systems for enabling creation of augmented reality content
US20130050432A1 (en) * 2011-08-30 2013-02-28 Kathryn Stone Perez Enhancing an object of interest in a see-through, mixed reality display device
US20140146394A1 (en) * 2012-11-28 2014-05-29 Nigel David Tout Peripheral display for a near-eye display device
US9965895B1 (en) * 2014-03-20 2018-05-08 A9.Com, Inc. Augmented reality Camera Lucida
CN108604388A * 2015-10-17 2018-09-28 亚力维斯股份有限公司 Direct volume rendering in virtual reality and/or augmented reality
US20170124980A1 (en) * 2015-11-02 2017-05-04 Castar, Inc. Method of immersive rendering for wide field of view
CN106856009A * 2015-12-09 2017-06-16 想象技术有限公司 Foveated rendering
US20170330496A1 (en) * 2016-05-16 2017-11-16 Unity IPR ApS System and method for rendering images in virtual reality and mixed reality devices
CN105892061A (en) * 2016-06-24 2016-08-24 北京国承万通信息科技有限公司 Display device and display method
CN109791605A * 2016-08-01 2019-05-21 脸谱科技有限责任公司 Adaptive parameters in image regions based on eye tracking information
US20190272626A1 (en) * 2016-10-31 2019-09-05 Victoria Link Limited A rendering process and system
US20180188543A1 (en) * 2016-12-01 2018-07-05 Varjo Technologies Oy Display apparatus and method of displaying using electromechanical faceplate
CN108421252A (en) * 2017-02-14 2018-08-21 深圳梦境视觉智能科技有限公司 A kind of game implementation method and AR equipment based on AR equipment
CN110679147A (en) * 2017-03-22 2020-01-10 奇跃公司 Depth-based foveated rendering for display systems
US20190066379A1 (en) * 2017-08-24 2019-02-28 International Business Machines Corporation Personalized Augmented Reality Using Cognitive Analysis
CN107464278A * 2017-09-01 2017-12-12 叠境数字科技(上海)有限公司 Full-view spherical light field rendering method
US20190101756A1 (en) * 2017-09-29 2019-04-04 Hand Held Products, Inc. Scanning device
CN108234479A (en) * 2017-12-29 2018-06-29 北京百度网讯科技有限公司 For handling the method and apparatus of information
US10559121B1 (en) * 2018-03-16 2020-02-11 Amazon Technologies, Inc. Infrared reflectivity determinations for augmented reality rendering
US20190287495A1 (en) * 2018-03-16 2019-09-19 Magic Leap, Inc. Depth based foveated rendering for display systems
CN110324601A (en) * 2018-03-27 2019-10-11 京东方科技集团股份有限公司 Rendering method, computer product and display device
CN110322818A (en) * 2018-03-29 2019-10-11 豪威科技股份有限公司 Display device and operating method
CN108665521A (en) * 2018-05-16 2018-10-16 京东方科技集团股份有限公司 Image rendering method, device, system, computer readable storage medium and equipment
CN109005285A (en) * 2018-07-04 2018-12-14 百度在线网络技术(北京)有限公司 augmented reality processing method, terminal device and storage medium
CN109166170A (en) * 2018-08-21 2019-01-08 百度在线网络技术(北京)有限公司 Method and apparatus for rendering augmented reality scene
CN109615703A (en) * 2018-09-28 2019-04-12 阿里巴巴集团控股有限公司 Image presentation method, device and the equipment of augmented reality
CN109388467A (en) * 2018-09-30 2019-02-26 百度在线网络技术(北京)有限公司 Map information display method, device, computer equipment and storage medium
CN109167924A (en) * 2018-10-24 2019-01-08 清华-伯克利深圳学院筹备办公室 Video imaging method, system, equipment and storage medium based on Hybrid camera
CN109445112A (en) * 2019-01-05 2019-03-08 西安维度视界科技有限公司 A kind of AR glasses and the augmented reality method based on AR glasses
CN109801353A (en) * 2019-01-16 2019-05-24 北京七鑫易维信息技术有限公司 A kind of method of image rendering, server and terminal
CN109714583A (en) * 2019-01-22 2019-05-03 京东方科技集团股份有限公司 The display methods of augmented reality and the display system of augmented reality
CN110378990A (en) * 2019-07-03 2019-10-25 北京悉见科技有限公司 Augmented reality scene shows method, apparatus and storage medium
CN110647860A (en) * 2019-09-29 2020-01-03 百度在线网络技术(北京)有限公司 Information rendering method, device, equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FERRARI et al.: "A 3-D Mixed-Reality System for Stereoscopic Visualization of Medical Dataset", IEEE Transactions on Biomedical Engineering, vol. 56, no. 11, pages 2627-2633 *
ZHAO Dongyang et al.: "A Real-Time Depth-of-Field Rendering Algorithm Based on Luminance and Depth Information", Journal of System Simulation, no. 08, pages 54-59 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113262464A (en) * 2021-04-21 2021-08-17 青岛小鸟看看科技有限公司 Dynamic change method and device of virtual reality scene and electronic equipment
US11782505B1 (en) 2021-04-21 2023-10-10 Qingdao Pico Technology Co., Ltd. Dynamic changing method and apparatus for virtual reality scene, and electronic device
CN114356087A (en) * 2021-12-30 2022-04-15 北京绵白糖智能科技有限公司 Interaction method, device, equipment and storage medium based on augmented reality

Also Published As

Publication number Publication date
CN111553972B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
US20210350762A1 (en) Image processing device and image processing method
US10715791B2 (en) Virtual eyeglass set for viewing actual scene that corrects for different location of lenses than eyes
US10999412B2 (en) Sharing mediated reality content
CN111598818A (en) Face fusion model training method and device and electronic equipment
CN108292489A (en) Information processing unit and image generating method
WO2017208957A1 (en) Image generation device, image generation system, and image generation method
US10482666B2 (en) Display control methods and apparatuses
US20200241731A1 (en) Virtual reality vr interface generation method and apparatus
KR20130108643A (en) Systems and methods for a gaze and gesture interface
CN109840946B (en) Virtual object display method and device
EP3521978B1 (en) Apparatus and method for tracking a focal point in a head mounted display system
WO2020215960A1 (en) Method and device for determining area of gaze, and wearable device
CN111553972A (en) Method, apparatus, device and storage medium for rendering augmented reality data
KR20160060582A (en) Device and method for processing visual data, and related computer program product
CN111767110A (en) Image processing method, device, system, electronic device and storage medium
WO2022199597A1 (en) Method, apparatus and system for cropping image by vr/ar device
US11610343B2 (en) Video display control apparatus, method, and non-transitory computer readable medium
WO2020031493A1 (en) Terminal device and method for controlling terminal device
CN111857461A (en) Image display method and device, electronic equipment and readable storage medium
CN110941389A (en) Method and device for triggering AR information points by focus
CN113093901B (en) Panoramic picture display method, device and equipment
US11966278B2 (en) System and method for logging visible errors in a videogame
CN114859561B (en) Wearable display device, control method thereof and storage medium
US20240078734A1 (en) Information interaction method and apparatus, electronic device and storage medium
KR102684302B1 (en) Method and apparatus for navigating virtual content displayed by a virtual reality (VR) device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant