CN115834863A - Virtual reality-based picture rendering method, device, equipment and program product - Google Patents

Virtual reality-based picture rendering method, device, equipment and program product

Info

Publication number
CN115834863A
CN115834863A
Authority
CN
China
Prior art keywords
rendering
data
position data
virtual
virtual object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211604263.4A
Other languages
Chinese (zh)
Inventor
蔡一新
连辉
褚文辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuming Technology Hangzhou Co ltd
Original Assignee
Wuming Technology Hangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuming Technology Hangzhou Co ltd filed Critical Wuming Technology Hangzhou Co ltd
Priority to CN202211604263.4A priority Critical patent/CN115834863A/en
Publication of CN115834863A publication Critical patent/CN115834863A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application provide a virtual reality-based picture rendering method, apparatus, device, and program product, and relate to the field of computers. The method comprises the following steps: acquiring pupil distance data; adjusting rendering cameras respectively corresponding to the two pupils in a virtual space to a first relative position based on the pupil distance data, wherein a mapping relationship exists between the virtual space and the image acquisition environment of the AR device; obtaining an offset matrix, the offset matrix being camera offset data corresponding to the AR device wearer; and performing position correction on the rendering cameras respectively corresponding to the two pupils by using the offset matrix, and rendering the virtual object with the corrected rendering cameras based on position data of the virtual object in the virtual space, to obtain a rendering result of the virtual object in the image acquisition environment of the AR device.

Description

Virtual reality-based picture rendering method, device, equipment and program product
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a program product for rendering a screen based on virtual reality.
Background
AR (Augmented Reality) technology superimposes virtual information onto the real world and can even present content beyond reality. It is, to a certain extent, an extension of VR (Virtual Reality) technology, and AR device products are comparatively small, light, and portable. VR, AR, and MR (Mixed Reality) technologies all belong to XR (Extended Reality) technology and have wide application prospects.
In the related art, in an AR cloud rendering scheme, a rendering engine renders a 3D model of a detected object in an AR coordinate system according to SLAM (Simultaneous Localization and Mapping) data and object detection data returned by an algorithm, so as to map an entity object into a virtual space, thereby achieving an effect of combining virtual and real.
However, in practical applications of the above method, even when both the SLAM data and the object detection data are verified to be correct, the rendered model still does not visually fit the real object in the device; that is, in terms of visual effect, the degree of fit between the virtual object and the real object is low and the rendering effect is poor.
Disclosure of Invention
The embodiments of the present application provide a virtual reality-based picture rendering method, apparatus, device, and program product, with which the rendered virtual object can visually fit the corresponding entity object. The technical scheme is as follows.
In one aspect, a method for rendering a virtual reality-based picture is provided, where the method includes:
acquiring pupil distance data, wherein the pupil distance data are used for indicating the corresponding pupil distance of a virtual reality (AR) device wearer;
adjusting rendering cameras respectively corresponding to two pupils in a virtual space to a first relative position based on the pupil distance data, wherein the distance between the rendering cameras in the first relative position corresponds to the pupil distance data, and a mapping relation exists between the virtual space and an image acquisition environment of the AR device;
acquiring an offset matrix, wherein the offset matrix is camera offset data corresponding to the AR device wearer, and the offset matrix is used for performing object coincidence adjustment on rendering cameras respectively corresponding to the double pupils;
and performing position correction on the rendering cameras respectively corresponding to the double pupils by using the offset matrix, and rendering the virtual object by using the corrected rendering cameras based on the position data of the virtual object in the virtual space to obtain a rendering result of the virtual object in the image acquisition environment of the AR equipment.
In another aspect, a virtual reality-based picture rendering apparatus is provided, the apparatus including:
the pupil distance data acquisition module is used for acquiring pupil distance data, and the pupil distance data is used for indicating the pupil distance corresponding to the virtual reality AR equipment wearer;
a position adjusting module, configured to adjust, based on the interpupillary distance data, rendering cameras in a virtual space that respectively correspond to two pupils to a first relative position, where a distance between the rendering cameras in the first relative position corresponds to the interpupillary distance data, and a mapping relationship exists between the virtual space and an image acquisition environment of the AR device;
an offset matrix obtaining module, configured to obtain an offset matrix, where the offset matrix is camera offset data corresponding to the AR device wearer, and the offset matrix is used to perform object coincidence adjustment on rendering cameras respectively corresponding to the dual pupils;
and the rendering result acquisition module is used for correcting the positions of the rendering cameras respectively corresponding to the double pupils by using the offset matrix, rendering the virtual object by the corrected rendering camera based on the position data of the virtual object in the virtual space, and obtaining the rendering result of the virtual object in the image acquisition environment of the AR equipment.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the virtual reality based picture rendering method according to any of the embodiments of the present application.
In another aspect, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a virtual reality based picture rendering method as described in any of the embodiments of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the virtual reality-based picture rendering method according to any one of the above embodiments.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
By acquiring pupil distance data and adjusting the rendering cameras corresponding to the two pupils in the virtual space to the first relative position based on the pupil distance data, the distance between the rendering cameras at the first relative position corresponds to the pupil distance data, so that the rendering cameras can adapt to different AR device wearers. The rendering cameras are then position-corrected with the acquired offset matrix, and the virtual object is rendered by the corrected rendering cameras based on the position data of the virtual object in the virtual space, obtaining a rendering result of the virtual object in the image acquisition environment of the AR device. The rendering result therefore fits the real object from the visual perspective of the AR device wearer, which improves the degree of fit between the virtual object and the real object and improves the rendering effect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a virtual reality-based screen rendering method according to an exemplary embodiment of the present application;
figure 3 is a schematic diagram of pupil distance determination provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic illustration of a registration calibration provided by an exemplary embodiment of the present application;
FIG. 5 is a rendering result diagram provided by an exemplary embodiment of the present application;
FIG. 6 is a second relative position adjustment flow chart provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic illustration of data validation provided by an exemplary embodiment of the present application;
fig. 8 is a block diagram illustrating a virtual reality-based screen rendering apparatus according to an exemplary embodiment of the present disclosure;
FIG. 9 is a block diagram illustrating a virtual reality-based screen rendering apparatus module according to an exemplary embodiment of the present disclosure;
fig. 10 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, the following detailed description of the embodiments of the present application will be made with reference to the accompanying drawings.
It will be understood that, although the terms first, second, etc. may be used herein to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first parameter may also be referred to as a second parameter, and similarly, a second parameter may also be referred to as a first parameter, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "upon…", "when…", or "in response to determining…".
AR technology superimposes virtual information on the real world and can even present content beyond reality; it is, to a certain extent, an extension of VR technology, and AR device products are comparatively small, light, and portable. VR, AR, and MR technologies belong to XR technology and have wide application prospects. In the related art, in an AR cloud rendering scheme, a rendering engine renders a 3D model of a detected object in an AR coordinate system according to SLAM data and object detection data returned by an algorithm, so as to map an entity object into the virtual space, thereby achieving the effect of combining the virtual and the real. However, in practical applications, even when both the SLAM data and the object detection data are verified to be correct, the rendered model still does not visually fit the real object in the device; that is, in terms of visual effect, the degree of fit between the virtual object and the real object is low and the rendering effect is poor.
According to the virtual reality-based picture rendering method provided in the embodiments of the present application, pupil distance data are acquired, and the rendering cameras corresponding to the two pupils in the virtual space are adjusted to the first relative position based on the pupil distance data, so that the distance between the rendering cameras at the first relative position corresponds to the pupil distance data and the rendering cameras can adapt to different AR device wearers. The rendering cameras are position-corrected with the acquired offset matrix, and the virtual object is rendered by the corrected rendering cameras based on the position data of the virtual object in the virtual space, obtaining a rendering result of the virtual object in the image acquisition environment of the AR device. The rendering result therefore fits the real object from the visual perspective of the AR device wearer, improving the degree of fit between the virtual object and the real object and improving the rendering effect.
First, an environment in which the present application is implemented will be described. Referring to fig. 1, a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application is shown, where the implementation environment includes: a terminal 110.
The terminal 110 performs posture acquisition on the entity object 121 in the image acquisition environment 120 through a built-in camera or a connected camera device, and renders, based on the acquired posture data of the entity object 121, a virtual object 131 in the virtual space 130 that fits the entity object 121, that is, a rendering result of the virtual object 131 in the image acquisition environment 120; the image acquisition environment 120 and the virtual space 130 conform to a mapping relationship. The image acquisition environment 120 is the three-dimensional space corresponding to the image acquisition range of the camera device of the terminal 110.
Optionally, the terminal may be any of various types of terminal devices such as AR glasses, an AR helmet, a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a smart television, or a smart vehicle, which is not limited in the embodiments of the present application.
It should be noted that the server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud security, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), big data, and artificial intelligence platforms.
Cloud technology is a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize the computation, storage, processing, and sharing of data.
In some embodiments, the servers described above may also be implemented as nodes in a blockchain system.
It should be noted that information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this application are authorized by the user or sufficiently authorized by various parties, and the collection, use, and processing of the relevant data is required to comply with relevant laws and regulations and standards in relevant countries and regions.
Next, an application scenario of the virtual reality-based screen rendering method provided in the embodiment of the present application is described with reference to the foregoing implementation environment.
1. Applied to AR game scene
Optionally, in an AR game scene, the player acquires data of an object to be rendered in the image acquisition environment by wearing an AR device with data acquisition capability, such as an AR helmet with a built-in camera device or AR glasses. The acquired data include object posture data and object position data. The AR device worn by the player executes the virtual reality-based picture rendering method based on the acquired data, rendering the virtual object corresponding to the object to be rendered into the AR space, so that the virtual object displayed in the rendering result completely fits the object to be rendered in the image acquisition environment from the visual perspective of the AR device wearer, thereby implementing an immersive game experience.
2. Applied to novel consumption scene
Optionally, in a novel consumption scene, a consumer performs posture acquisition on a specified object through posture acquisition equipment such as a mobile phone camera to obtain object posture data. The AR device worn by the consumer acquires the object posture data, executes the virtual reality-based picture rendering method to render the virtual posture of the specified object wearing or equipped with a specified commodity, and displays that virtual posture in the virtual consumption scene. From the consumer's visual perspective, the specified object fits the wearing or equipping effect of the specified commodity, so that the consumer can remotely obtain shopping experiences such as cloud try-on.
It should be noted that the application scenarios described above are only illustrative examples, and the virtual reality-based picture rendering method provided in the embodiment of the present application may be applied to any scenario in which picture rendering is performed in an AR scenario.
Fig. 2 is a flowchart illustrating a method for rendering a virtual reality-based screen according to an exemplary embodiment of the present application, where the method may be applied to a terminal, a server, or both the terminal and the server, and the method is described as applied to the terminal in the embodiment of the present application, where as shown in fig. 2, the method includes the following steps:
Step 210, acquiring pupil distance data.
The pupil distance data are used for indicating the pupil distance corresponding to the AR device wearer, that is, the distance between the right-eye pupil center and the left-eye pupil center of the AR device wearer.
In some embodiments, the manner of acquiring the interpupillary distance data comprises at least one of:
first, in an AR device interpupillary distance configuration mode, interpupillary distance data is manually set by an AR device wearer, so that a terminal executing a virtual reality-based picture rendering method acquires the interpupillary distance data.
In some embodiments, the manual setting mode is implemented by inputting pupil distance data through an input device of a terminal executing a virtual reality-based image rendering method, or by calibrating a pupil distance reference element provided by an AR device pupil distance configuration mode, and adjusting relevant parameters of the AR device. It should be noted that the above-mentioned manual setting manner is only an illustrative example, and the embodiment of the present application does not limit this.
Referring to fig. 3, fig. 3 is a schematic diagram of pupil distance determination provided in an exemplary embodiment of the present application. As shown in fig. 3, a left-eye positioning point 310 and a right-eye positioning point 320 are displayed in an AR space 300 presented by the AR device. The left-eye positioning point 310 and the right-eye positioning point 320 represent the default pupil distance of the AR device, 63 millimeters (mm); that is, if a left-eye pupil reference element 301 and a right-eye pupil reference element 302 coincide with the left-eye positioning point 310 and the right-eye positioning point 320 respectively, the pupil distance of the AR device wearer is 63 mm. The left-eye pupil reference element 301 and the right-eye pupil reference element 302 represent the actual pupil distance of the AR device wearer: they are generated by the AR device from image data acquired of the wearer's pupils, and are the points in the AR space 300 toward which the wearer's two eyes are respectively directed. The AR device wearer adjusts the device so that the left-eye pupil reference element 301 and the right-eye pupil reference element 302 lie on the same horizontal line as the left-eye positioning point 310 and the right-eye positioning point 320. The AR device then obtains the horizontal distance between the left-eye pupil reference element 301 and the left-eye positioning point 310 and the horizontal distance between the right-eye pupil reference element 302 and the right-eye positioning point 320, and calculates the pupil distance data of the AR device wearer from these horizontal distances and the default pupil distance (a sketch of this calculation is given after the acquisition modes below).
Secondly, when the AR device is in a pupil distance measurement mode, the pupil distance of the AR device wearer is measured automatically through a sensor or camera device built into the AR device, thereby acquiring the pupil distance data.
Schematically, the built-in camera device of the AR device performs image acquisition on both eyes of the wearer of the AR device, inputs the acquired image data into a pre-deployed pupil detection model, obtains pupil position data of both eyes of the wearer of the AR device through the output of the model, and automatically calculates pupil distance data based on the pupil position data.
It should be noted that the above-mentioned manner of acquiring the pupil distance data is only an illustrative example, and the embodiment of the present application does not limit this.
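For illustration only, the following Python sketch shows one way the calculation described for the manual calibration mode of fig. 3 could be carried out; the variable names, the sign convention for the horizontal offsets, and the helper function are assumptions and not part of the embodiments.

```python
# Hypothetical sketch of the pupil-distance calculation from the manual
# calibration mode (fig. 3). Names and sign conventions are assumptions.

DEFAULT_PUPIL_DISTANCE_MM = 63.0  # default distance between positioning points 310 and 320

def pupil_distance_from_calibration(left_offset_mm: float, right_offset_mm: float) -> float:
    """Estimate the wearer's pupil distance.

    left_offset_mm:  signed horizontal distance from reference element 301 to
                     positioning point 310 (positive when 301 lies outward, i.e.
                     farther from the nose than 310).
    right_offset_mm: signed horizontal distance from reference element 302 to
                     positioning point 320, same outward-positive convention.
    """
    # Each outward offset widens the measured pupil distance relative to the default.
    return DEFAULT_PUPIL_DISTANCE_MM + left_offset_mm + right_offset_mm

# Example: both reference elements sit 1.5 mm inside their positioning points,
# giving a wearer pupil distance of 60 mm.
print(pupil_distance_from_calibration(-1.5, -1.5))  # -> 60.0
```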
Step 220, adjusting the rendering cameras respectively corresponding to the two pupils in the virtual space to a first relative position based on the interpupillary distance data.
The distance between the rendering cameras at the first relative position corresponds to the pupil distance data, and a mapping relation exists between the virtual space and the image acquisition environment of the AR device.
In some embodiments, the distance data in virtual space of the rendering camera in the first relative position is equivalent to the interpupillary distance data.
In some embodiments, the mapping relationship between the virtual space and the image acquisition environment of the AR device conforms to an equal-scaling relationship; that is, the virtual space is obtained by the AR device scaling the image acquisition environment by a specified ratio and mapping it into a space of the same dimensions. Illustratively, the virtual space is the three-dimensional space obtained by mapping the image acquisition environment at a 1:1 ratio.
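As a hedged illustration of step 220, the sketch below positions the left and right rendering cameras half the pupil distance to either side of a head-centre point in a 1:1-mapped virtual space; the coordinate convention, units, and data structures are assumptions made for illustration, not a prescribed implementation.

```python
import numpy as np

def first_relative_position(head_center: np.ndarray, right_axis: np.ndarray,
                            pupil_distance_mm: float) -> tuple[np.ndarray, np.ndarray]:
    """Place the two rendering cameras symmetrically about the head centre.

    In a 1:1-mapped virtual space the camera separation equals the pupil distance.
    """
    half = 0.5 * pupil_distance_mm / 1000.0          # assumed metre-based virtual space
    right = right_axis / np.linalg.norm(right_axis)   # unit vector pointing to the wearer's right
    left_cam = head_center - half * right
    right_cam = head_center + half * right
    return left_cam, right_cam

left_cam, right_cam = first_relative_position(np.zeros(3), np.array([1.0, 0.0, 0.0]), 63.0)
# The distance between the two cameras now corresponds to the pupil distance data.
```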
In some embodiments, after the rendering cameras are adjusted to the first relative position based on the pupil distance data, the method further comprises an angle adjustment process implemented as the following two steps:
first step ofAnd acquiring the device angle parameter of the AR device.
In some embodiments, the device angle parameter is implemented as an included angle between the waveguide pieces of the AR glasses, the device angle parameter is stored in the AR device as a preset parameter, and the AR device can obtain the device angle parameter by direct reading.
Second step: adjust the rendering angles of the rendering cameras based on the device angle parameter.
The angle data between the rendering cameras are adjusted so that the adjusted angle data correspond to the device angle parameter.
Illustratively, the included angle between the AR glasses' waveguide pieces, serving as the device angle parameter, is 160 degrees, and the current included angle between the rendering cameras is 180 degrees. The rendering angle of each rendering camera is adjusted by 10 degrees, so that the adjusted included angle between the rendering cameras is 160 degrees, matching the included angle between the AR glasses' waveguide pieces in both value and orientation.
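A minimal sketch of this angle adjustment, assuming the cameras start facing straight ahead and each is rotated by half of the difference between the current camera angle and the device angle parameter; the yaw sign convention and the 180-degree starting angle are illustrative assumptions.

```python
def rendering_camera_yaws(device_angle_deg: float, current_angle_deg: float = 180.0) -> tuple[float, float]:
    """Return (left_yaw, right_yaw) adjustments in degrees.

    Example from the description: a 160-degree waveguide angle with cameras
    currently at 180 degrees rotates each camera by 10 degrees toward the other,
    so the included angle between the cameras becomes 160 degrees.
    """
    per_camera = (current_angle_deg - device_angle_deg) / 2.0
    return (+per_camera, -per_camera)   # opposite signs: the cameras toe in symmetrically

print(rendering_camera_yaws(160.0))  # -> (10.0, -10.0)
```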
At step 230, an offset matrix is obtained.
The offset matrix is camera offset data corresponding to the AR device wearer, and the offset matrix is used for performing object coincidence adjustment on the rendering cameras respectively corresponding to the double pupils.
In some embodiments, the offset matrix is determined after manual coincidence calibration by the AR device wearer, in which case step 230 is implemented as the following two steps:
first step ofAnd obtaining calibration data.
In some embodiments, in the coincidence calibration the AR device wearer manually adjusts a coincidence reference element representing the virtual object until it coincides with the entity object corresponding to the virtual object.
Referring to fig. 4, fig. 4 is a schematic diagram of coincidence calibration provided in an exemplary embodiment of the present application. As shown in fig. 4, an entity chessboard 420 in the image acquisition environment and a virtual chessboard 410 serving as the coincidence reference element are displayed in a virtual space 400. The AR device wearer calibrates the position of the virtual chessboard 410 through manual debugging; when the AR device wearer observes through the AR device that the virtual chessboard 410 and the entity chessboard 420 coincide, the AR device obtains calibration data based on the calibration result, that is, the corresponding offset parameter.
Second step: obtain the offset matrix based on the calibration data.
Optionally, the manner of obtaining the offset matrix based on the calibration data includes at least one of the following manners:
firstly, inputting calibration data into a preset offset model, and outputting to obtain an offset matrix.
Second, an offset matrix is determined from an offset look-up table based on the calibration data.
The offset comparison table comprises the corresponding relation between the calibration data and the offset matrix.
In some embodiments, the offset matrix is obtained by matching a matrix corresponding to the calibration data in the offset lookup table.
Thirdly, mapping the calibration data through a preset algorithm to obtain an offset matrix.
Wherein the preset algorithm is determined based on the mapping relationship.
In some embodiments, a movement matrix is preset for adjusting the position of the rendering cameras, so that the position of the rendering cameras in the virtual space and the position of the AR device in the image acquisition environment conform to the mapping relationship between the virtual space and the image acquisition environment. The preset algorithm is implemented by multiplying the offset parameter represented by the calibration data by the movement matrix to obtain the offset matrix.
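For illustration, the sketch below shows the third manner under the assumption that the calibration offset parameter is expressed as a 4×4 homogeneous translation and simply matrix-multiplied with the preset movement matrix; the composition order and matrix layout are assumptions, not specified by the embodiments.

```python
import numpy as np

def translation_matrix(dx: float, dy: float, dz: float) -> np.ndarray:
    """Build a homogeneous 4x4 translation matrix from a calibration offset."""
    m = np.eye(4)
    m[:3, 3] = [dx, dy, dz]
    return m

def offset_matrix_from_calibration(calibration_offset: tuple[float, float, float],
                                   movement_matrix: np.ndarray) -> np.ndarray:
    """Third manner: map the calibration data to an offset matrix via the preset algorithm.

    The offset parameter obtained from the wearer's coincidence calibration is
    multiplied by the preset movement matrix (assumed composition order).
    """
    return movement_matrix @ translation_matrix(*calibration_offset)

offset = offset_matrix_from_calibration((0.002, -0.001, 0.0), np.eye(4))
```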
Step 240, performing position correction on the rendering cameras respectively corresponding to the two pupils by using the offset matrix, and rendering the virtual object through the corrected rendering cameras based on the position data of the virtual object in the virtual space, to obtain a rendering result of the virtual object in the image acquisition environment of the AR device.
In some embodiments, the rendering camera position data respectively corresponding to the two pupils, that is, the first relative position coordinate data respectively corresponding to the two pupils, is multiplied by the offset matrix to obtain corrected position data capable of representing the corrected position of the rendering camera, and the rendering camera is moved to the position corresponding to the corrected position data in the virtual space.
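A hedged sketch of this correction step in homogeneous coordinates; treating the camera position as a column vector and left-multiplying by the offset matrix is an assumption about the convention used.

```python
import numpy as np

def correct_camera_position(camera_position: np.ndarray, offset_matrix: np.ndarray) -> np.ndarray:
    """Apply the wearer-specific offset matrix to one rendering camera position.

    camera_position: (x, y, z) first-relative-position coordinates in the virtual space.
    offset_matrix:   4x4 camera offset data obtained from the coincidence calibration.
    """
    homogeneous = np.append(camera_position, 1.0)   # lift to homogeneous coordinates
    corrected = offset_matrix @ homogeneous          # assumed left-multiplication convention
    return corrected[:3]

# Each of the two rendering cameras is corrected and then moved to the corrected position.
left_corrected = correct_camera_position(np.array([-0.0315, 0.0, 0.0]), np.eye(4))
right_corrected = correct_camera_position(np.array([0.0315, 0.0, 0.0]), np.eye(4))
```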
In some embodiments, object posture data of an entity object are acquired by the AR device and converted to obtain the posture data of the virtual object corresponding to the entity object, and the position data of the virtual object in the virtual space are obtained by an object detection algorithm and used as the virtual object position data. At the position represented by the virtual object position data in the virtual space, the object posture data of the virtual object are rendered by the corrected rendering cameras to obtain a rendering result of the virtual object in the image acquisition environment of the AR device; that is, from the visual perspective of the AR device wearer, the virtual object completely coincides with the entity object in the image acquisition environment.
Referring to fig. 5, fig. 5 is a schematic diagram of a rendering result provided in an exemplary embodiment of the present application. As shown in fig. 5, the image acquisition environment 510 of the AR device includes an entity backboard 511 and an entity basketball 512. The posture data of the entity backboard 511 are acquired by the AR device and converted to obtain the posture data of the virtual backboard 521 corresponding to the entity backboard 511, while the entity basketball 512 serves as an entity background element in the image acquisition environment 510. The position data of the virtual backboard 521 in the virtual space 520 are obtained by the object detection algorithm as the virtual object position data, and at the position represented by these data in the virtual space 520, the posture data of the virtual backboard 521 are rendered by the corrected rendering cameras to obtain the rendering result of the virtual backboard in the image acquisition environment 510 of the AR device. That is, from the visual perspective of the AR device wearer, the virtual backboard 521 completely coincides with the entity backboard 511 in the image acquisition environment 510, while the entity basketball 512, as the entity background element of the image acquisition environment 510, is displayed consistently with its position in the image acquisition environment 510 in accordance with the mapping relationship.
To sum up, in the method provided by the embodiments of the present application, pupil distance data are acquired and the rendering cameras corresponding to the two pupils in the virtual space are adjusted to the first relative position based on the pupil distance data, so that the distance between the rendering cameras at the first relative position corresponds to the pupil distance data and the rendering cameras can adapt to different AR device wearers. The rendering cameras are position-corrected with the acquired offset matrix, and the virtual object is rendered by the corrected rendering cameras based on the position data of the virtual object in the virtual space, obtaining a rendering result of the virtual object in the image acquisition environment of the AR device. The rendering result therefore fits the real object from the visual perspective of the AR device wearer, which improves the degree of fit between the virtual object and the real object and improves the rendering effect.
The method provided by the embodiments of the present application specifies the manner of obtaining the offset matrix from the calibration data, ensures that the rendering result fits the real object from the visual perspective of the AR device wearer, improves the degree of fit between the virtual object and the real object, and improves the rendering effect.
The method provided by the embodiment of the application provides various offset matrix determination modes, and improves the accuracy of converting the calibration data into the offset matrix, so that the fitting degree of the virtual object and the real object is improved.
The method provided by the embodiment of the application provides a scheme for adjusting the rendering angle of the rendering camera based on the angle parameter of the equipment, so that the adjusted included angle data between the rendering cameras corresponds to the angle parameter of the equipment, and the adaptability of AR equipment with different angle parameters of the equipment is improved.
In some embodiments, to accommodate movement of the AR device during the above virtual reality-based picture rendering method, a rendering camera movement procedure is further included before the offset matrix is obtained. Referring to fig. 6, fig. 6 is a flowchart of second relative position adjustment provided in an exemplary embodiment of the present application; as shown in fig. 6, the method includes the following steps:
step 610, obtain a movement matrix.
The movement matrix is used for positioning and moving the rendering cameras respectively corresponding to the double pupils, so that the position data of the rendering cameras in the virtual space and the position data of the AR equipment in the image acquisition environment are in accordance with the mapping relation between the virtual space and the image acquisition environment of the AR equipment.
In some embodiments, the movement matrix is a 4×4 transformation matrix obtained by a SLAM algorithm, and the rendering cameras are positioned and moved in real time. When the AR device moves in the image acquisition environment, the position data of the rendering camera in the virtual space before the movement, that is, the coordinate data of the rendering camera before the movement, are multiplied by the movement matrix to obtain the position data of the rendering camera after the movement in the virtual space, so that the position data of the rendering camera in the virtual space and the position data of the AR device in the image acquisition environment always conform to the mapping relationship between the virtual space and the image acquisition environment of the AR device.
The SLAM algorithm positions the AR device in the image acquisition environment and, based on the positioning data, generates in real time the position data of the corresponding rendering camera in the virtual space, completing the mapping from the image acquisition environment to the virtual space; that is, the coordinate system of the image acquisition environment is mapped to the coordinate system of the virtual space. A corresponding transformation matrix is generated based on the mapping relationship in this mapping process, and multiplying the coordinate data of a position in the image acquisition environment by the corresponding transformation matrix yields the corresponding coordinate data of that position in the virtual space.
Step 620, based on the movement matrix, adjusting the rendering cameras respectively corresponding to the two pupils to a second relative position.
And the second relative position and the position of the AR equipment in the image acquisition environment accord with the mapping relation between the virtual space and the image acquisition environment of the AR equipment.
Schematically, the first coordinate data of the first relative positions of the rendering cameras corresponding to the two pupils in the virtual space are acquired and multiplied by the movement matrix to obtain the second coordinate data of the second relative positions of the rendering cameras in the virtual space. The rendering cameras are moved to the second relative positions represented by the second coordinate data, and the second relative positions of the rendering cameras and the current position of the AR device in the image acquisition environment conform to the mapping relationship.
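The following sketch illustrates this adjustment with a SLAM-style 4×4 homogeneous transform; the composition order and the use of numpy are assumptions made for illustration.

```python
import numpy as np

def move_rendering_camera(first_position: np.ndarray, movement_matrix: np.ndarray) -> np.ndarray:
    """Map a rendering camera from its first relative position to the second one.

    first_position:  (x, y, z) coordinates of the camera in the virtual space.
    movement_matrix: 4x4 transform from the SLAM algorithm tracking the AR device.
    """
    second = movement_matrix @ np.append(first_position, 1.0)
    return second[:3]

# Example: the AR device moved 0.5 m forward, so both cameras move accordingly
# and keep the mapping relationship with the device's position.
move = np.eye(4); move[2, 3] = 0.5
second_left = move_rendering_camera(np.array([-0.0315, 0.0, 0.0]), move)
second_right = move_rendering_camera(np.array([0.0315, 0.0, 0.0]), move)
```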
To sum up, the method provided by this embodiment of the present application specifies the scheme of adjusting the rendering cameras to the second relative position based on the movement matrix. As the AR device moves in the image acquisition environment, the rendering cameras move correspondingly in the virtual space, ensuring that the position data of the rendering cameras in the virtual space and the position data of the AR device in the image acquisition environment always conform to the mapping relationship between the virtual space and the image acquisition environment. This provides a precondition for the complete fit between the virtual object and the real object and improves the accuracy of that fit.
Referring to fig. 7, fig. 7 is a schematic diagram of data verification provided in an exemplary embodiment of the present application. As shown in fig. 7, in some embodiments the above virtual reality-based picture rendering method is performed when the rendering camera position data and the virtual object position data meet a verification condition; that is, in step 700, the pupil distance data are obtained when the rendering camera position data and the virtual object position data meet the verification condition.
The rendering camera position data is position data of the rendering camera in the virtual space, the virtual object position data is position data of the virtual object in the virtual space, and the verification condition is used for indicating the corresponding degree between the rendering camera position data and the virtual object position data and the mapping relation.
In some embodiments, the rendering camera position data is data acquired by a SLAM algorithm, and the virtual object position data is data acquired by an object detection algorithm.
As shown in fig. 7, in some embodiments, step 700 includes a rendering camera position data verification portion 710 and a virtual object position data verification portion 720, rendering camera position data verification portion 710 includes steps 711 to 712, and virtual object position data verification portion 720 includes steps 721 to 722.
Step 711, acquiring the rendering camera position data and the device position data, verifying the rendering camera position data, and determining that the rendering camera position data meet the verification condition when the rendering camera position data and the device position data conform to the mapping relationship.
Wherein the device location data is location data of the AR device in the image acquisition environment.
In some embodiments, a mapping relationship table is preset, where the mapping relationship table includes a corresponding relationship between virtual space position data and image acquisition environment position data, and it is verified whether rendering camera position data and device position data conform to the corresponding relationship in the mapping relationship table according to the mapping relationship table, and if so, it is determined that rendering camera position data and device position data conform to the mapping relationship, and it is determined that rendering camera position data conforms to the verification condition.
In some embodiments, when the rendering camera position data is data acquired by a SLAM algorithm, corresponding rendering camera position simulation data is acquired by the SLAM algorithm based on the device position data, and in the case that the rendering camera position simulation data is consistent with the rendering camera position data, the rendering camera position data and the device position data conform to a mapping relationship, and it is determined that the rendering camera position data conforms to a verification condition.
In some embodiments, a device mapping matrix is preset with which the device position data can be mapped to the corresponding rendering camera position data. The acquired device position data are multiplied by the device mapping matrix to obtain device-mapped position data; when the device-mapped position data are consistent with the acquired rendering camera position data, the rendering camera position data and the device position data conform to the mapping relationship, and it is determined that the rendering camera position data meet the verification condition.
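A minimal sketch of this matrix-based check, assuming homogeneous coordinates and a small tolerance for the consistency comparison; the tolerance value and function names are illustrative assumptions.

```python
import numpy as np

def camera_position_verified(device_position: np.ndarray, camera_position: np.ndarray,
                             device_mapping_matrix: np.ndarray, tol: float = 1e-3) -> bool:
    """Check whether the rendering camera position conforms to the mapping relationship.

    The device position (in the image acquisition environment) is mapped with the
    preset device mapping matrix and compared with the rendering camera position
    (in the virtual space); the pupil distance data are acquired only if this holds.
    """
    mapped = device_mapping_matrix @ np.append(device_position, 1.0)
    return bool(np.allclose(mapped[:3], camera_position, atol=tol))

ok = camera_position_verified(np.array([0.0, 1.6, 0.0]), np.array([0.0, 1.6, 0.0]), np.eye(4))
```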
Step 712, acquiring the pupil distance data when the rendering camera position data meet the verification condition.
The mode of obtaining the interpupillary distance comprises a manual setting mode of an AR device wearer and an automatic measurement mode of the AR device.
Step 721, acquiring the virtual object position data and the object acquisition position data, verifying the virtual object position data, and determining that the virtual object position data meet the verification condition when the virtual object position data and the object acquisition position data conform to the mapping relationship.
The object acquisition position data is position data of a solid object corresponding to the virtual object in the image acquisition environment.
In some embodiments, a mapping relationship table is preset, which includes a corresponding relationship between the virtual space position data and the image acquisition environment position data, and whether the virtual object position data and the object acquisition position data conform to the corresponding relationship in the mapping relationship table is verified according to the mapping relationship table, if so, it is determined that the virtual object position data and the object acquisition position data conform to the mapping relationship, and the virtual object position data conform to the verification condition is determined.
In some embodiments, when the virtual object position data is data obtained by an object detection algorithm, corresponding virtual object position simulation data is obtained by the object detection algorithm based on the object collection position data, and when the virtual object position simulation data is consistent with the obtained virtual object position data, the virtual object position data and the object collection position data conform to a mapping relationship, and it is determined that the virtual object position data conforms to a verification condition.
In some embodiments, an object mapping matrix is preset with which the object acquisition position data can be mapped to the corresponding virtual object position data. The acquired object acquisition position data are multiplied by the object mapping matrix to obtain object-acquisition-mapped position data; when these are consistent with the acquired virtual object position data, the virtual object position data and the object acquisition position data conform to the mapping relationship, and it is determined that the virtual object position data meet the verification condition.
Step 722, acquiring the pupil distance data when the virtual object position data meet the verification condition.
The mode of obtaining the interpupillary distance comprises a manual setting mode of an AR device wearer and an automatic measurement mode of the AR device.
To sum up, the method provided by the embodiments of the present application specifies that the pupil distance data are acquired when the rendering camera position data and the virtual object position data meet the verification condition, thereby defining the execution conditions of the virtual reality-based picture rendering method, eliminating the non-fit phenomenon caused by data errors, improving the accuracy of the method, and improving the degree of fit.
The method provided by the embodiments of the present application specifies the manner of acquiring the pupil distance data when the rendering camera position data meet the verification condition, eliminating the non-fit phenomenon caused by errors in the rendering camera position data and improving the rendering accuracy.
The method provided by the embodiments of the present application specifies the manner of acquiring the pupil distance data when the virtual object position data meet the verification condition, eliminating the non-fit phenomenon caused by errors in the virtual object position data and improving the rendering accuracy.
Fig. 8 is a block diagram illustrating a virtual reality-based screen rendering apparatus according to an exemplary embodiment of the present application, where as shown in fig. 8, the apparatus includes the following components:
the interpupillary distance data acquisition module 810 is configured to acquire interpupillary distance data, where the interpupillary distance data is used to indicate a corresponding interpupillary distance of a virtual reality (AR) device wearer;
a position adjusting module 820, configured to adjust rendering cameras respectively corresponding to two pupils in a virtual space to a first relative position based on the interpupillary distance data, where a distance between the rendering cameras in the first relative position corresponds to the interpupillary distance data, and a mapping relationship exists between the virtual space and an image acquisition environment of the AR device;
an offset matrix obtaining module 830, configured to obtain an offset matrix, where the offset matrix is camera offset data corresponding to the AR device wearer, and the offset matrix is used to perform object coincidence adjustment on rendering cameras respectively corresponding to the two pupils;
a rendering result obtaining module 840, configured to perform position correction on the rendering cameras respectively corresponding to the two pupils by using the offset matrix, and render the virtual object through the corrected rendering camera based on position data of the virtual object in the virtual space, so as to obtain a rendering result of the virtual object in the image acquisition environment of the AR device.
Referring to fig. 9, fig. 9 is a block diagram illustrating a structure of a virtual reality-based screen rendering apparatus module according to an exemplary embodiment of the present application, and as shown in fig. 9, in some embodiments, the offset matrix obtaining module 830 includes:
a calibration data obtaining unit 831, configured to obtain calibration data, where the calibration data is an offset parameter obtained after a coincidence calibration of a coincidence reference element in the virtual space is performed by the AR device wearer;
an offset matrix obtaining unit 832, configured to obtain the offset matrix based on the calibration data.
In some embodiments, the offset matrix obtaining unit 832 is configured to input the calibration data into a preset offset model and output the offset matrix; or determine the offset matrix through an offset comparison table based on the calibration data, where the offset comparison table includes the correspondence between calibration data and offset matrices; or map the calibration data to the offset matrix through a preset algorithm, where the preset algorithm is determined based on the mapping relationship.
In some embodiments, the apparatus further comprises:
an angle parameter acquiring module 850, configured to acquire a device angle parameter of the AR device;
a rendering angle adjusting module 860, configured to adjust the rendering angle of the rendering cameras based on the device angle parameter, where the adjusted included angle data between the rendering cameras corresponds to the device angle parameter.
In some embodiments, the apparatus further comprises:
a moving matrix obtaining module 870, configured to obtain a moving matrix, where the moving matrix is camera moving data corresponding to a moving process of the AR device, and the moving matrix is used to perform positioning movement on rendering cameras respectively corresponding to two pupils, so that position data of the rendering cameras in the virtual space and position data of the AR device in the image acquisition environment conform to a mapping relationship between the virtual space and the image acquisition environment of the AR device;
a second relative position adjusting module 880, configured to adjust, based on the movement matrix, the rendering cameras respectively corresponding to the two pupils to a second relative position, where the second relative position and the position of the AR device in the image acquisition environment conform to a mapping relationship between the virtual space and the image acquisition environment of the AR device.
In some embodiments, the interpupillary distance data obtaining module 810 includes an interpupillary distance data obtaining unit 811 configured to obtain the interpupillary distance data if rendering camera position data and virtual object position data meet a verification condition, the rendering camera position data being position data of the rendering camera in the virtual space, the virtual object position data being position data of the virtual object in the virtual space, the verification condition being configured to indicate a degree of correspondence between the rendering camera position data and the virtual object position data and the mapping relationship.
In some embodiments, the interpupillary distance data acquiring unit 811 is configured to acquire the rendering camera position data and the device position data, verify the rendering camera position data, and determine that the rendering camera position data meets the verification condition when the rendering camera position data and the device position data meet the mapping relationship, where the device position data is position data of the AR device in the image acquisition environment; acquiring the interpupillary distance data when the rendering camera position data meets the verification condition.
In some embodiments, the interpupillary distance data acquiring unit 811 is configured to acquire the virtual object position data and the object acquisition position data, verify the virtual object position data, and determine that the virtual object position data meets the verification condition when the virtual object position data and the object acquisition position data meet the mapping relationship, where the object acquisition position data is position data of an entity object corresponding to the virtual object in the image acquisition environment; and acquiring the pupil distance data under the condition that the virtual object position data meets the verification condition.
To sum up, the apparatus provided by the embodiments of the present application acquires pupil distance data and adjusts the rendering cameras corresponding to the two pupils in the virtual space to the first relative position based on the pupil distance data, so that the distance between the rendering cameras at the first relative position corresponds to the pupil distance data and the rendering cameras can adapt to different AR device wearers. The rendering cameras are position-corrected with the acquired offset matrix, and the virtual object is rendered by the corrected rendering cameras based on the position data of the virtual object in the virtual space, obtaining a rendering result of the virtual object in the image acquisition environment of the AR device. The rendering result therefore fits the real object from the visual perspective of the AR device wearer, which improves the degree of fit between the virtual object and the real object and improves the rendering effect.
It should be noted that: the virtual reality-based screen rendering apparatus provided in the above embodiment is only illustrated by the division of the functional modules, and in practical applications, the function allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
Fig. 10 shows a block diagram of a terminal 1000 according to an exemplary embodiment of the present application. The terminal 1000 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1000 can include: a processor 1001 and a memory 1002.
Processor 1001 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1001 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), is a processor for processing data in an awake state; the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1001 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1002 is used to store at least one instruction for execution by processor 1001 to implement a virtual reality based screen rendering method provided by method embodiments herein.
In some embodiments, terminal 1000 can include other components, and those skilled in the art will appreciate that the configuration shown in FIG. 10 is not intended to be limiting and can include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be used.
Embodiments of the present application also provide a computer device, which may be implemented as the terminal or the server shown in Fig. 1. The computer device comprises a processor and a memory, wherein at least one instruction, at least one program, a code set, or an instruction set is stored in the memory and is loaded and executed by the processor to implement the virtual reality-based picture rendering method provided by the foregoing method embodiments.
Embodiments of the present application further provide a computer-readable storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored on the computer-readable storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the virtual reality-based picture rendering method provided by the foregoing method embodiments.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the virtual reality-based picture rendering method according to any one of the above embodiments.
Optionally, the computer-readable storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), an optical disc, or the like. The random access memory may include a Resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above description covers only exemplary embodiments of the present application and is not intended to limit the application; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.
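As a purely illustrative sketch of one of the offset-matrix determination options described in the embodiments (the offset comparison table), and not as the required implementation, the calibration offsets collected while the wearer aligns the coincidence reference element could be mapped to a pre-computed 4x4 matrix by nearest-neighbour lookup. The table contents, the function names, and the millimetre-scale values below are all assumptions.

```python
import numpy as np

def offset_from_xy(dx, dy):
    """Pre-computed offset matrix: a pure translation in this toy example."""
    m = np.eye(4)
    m[:3, 3] = [dx, dy, 0.0]
    return m

# Hypothetical offset comparison table: measured coincidence-calibration
# offsets (metres) -> corresponding camera offset matrix.
OFFSET_TABLE = {
    (0.000, 0.000): offset_from_xy(0.000, 0.000),
    (0.002, 0.000): offset_from_xy(0.002, 0.000),
    (0.000, 0.002): offset_from_xy(0.000, 0.002),
    (0.002, 0.002): offset_from_xy(0.002, 0.002),
}

def lookup_offset_matrix(calib_dx, calib_dy):
    """Select the table entry whose key is closest to the measured
    calibration offsets (nearest-neighbour lookup)."""
    key = min(OFFSET_TABLE,
              key=lambda k: (k[0] - calib_dx) ** 2 + (k[1] - calib_dy) ** 2)
    return OFFSET_TABLE[key]

# Example: the wearer's coincidence calibration yielded roughly 1.8 mm / 0.3 mm.
print(lookup_offset_matrix(0.0018, 0.0003))
```

The same interface could equally be backed by a preset offset model or by a preset mapping algorithm, the other two options mentioned in the embodiments.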

Claims (12)

1. A picture rendering method based on virtual reality is characterized by comprising the following steps:
acquiring pupil distance data, wherein the pupil distance data are used for indicating the corresponding pupil distance of a virtual reality (AR) device wearer;
adjusting rendering cameras respectively corresponding to two pupils in a virtual space to a first relative position based on the pupil distance data, wherein the distance between the rendering cameras in the first relative position corresponds to the pupil distance data, and a mapping relation exists between the virtual space and an image acquisition environment of the AR device;
acquiring an offset matrix, wherein the offset matrix is camera offset data corresponding to the AR device wearer, and the offset matrix is used for performing object coincidence adjustment on rendering cameras respectively corresponding to the double pupils;
and performing position correction on the rendering cameras respectively corresponding to the double pupils by using the offset matrix, and rendering the virtual object by using the corrected rendering cameras based on the position data of the virtual object in the virtual space to obtain a rendering result of the virtual object in the image acquisition environment of the AR device.
2. The method of claim 1, wherein obtaining the offset matrix comprises:
acquiring calibration data, wherein the calibration data is an offset parameter obtained after the AR equipment wearer performs coincidence calibration on coincidence reference elements in the virtual space;
obtaining the offset matrix based on the calibration data.
3. The method of claim 2, wherein said obtaining the offset matrix based on the calibration data comprises:
inputting the calibration data into a preset offset model and taking the output of the preset offset model as the offset matrix; or,
determining the offset matrix through an offset comparison table based on the calibration data, wherein the offset comparison table comprises the corresponding relation between the calibration data and the offset matrix; or,
and mapping the calibration data to obtain the offset matrix through a preset algorithm, wherein the preset algorithm is determined based on the mapping relation.
4. The method of claim 1, wherein after the adjusting of the rendering cameras respectively corresponding to the dual pupils in the virtual space to the first relative position based on the interpupillary distance data, the method further comprises:
acquiring a device angle parameter of the AR device;
and adjusting the rendering angle of the rendering cameras based on the equipment angle parameters, wherein the adjusted included angle data between the rendering cameras correspond to the equipment angle parameters.
5. The method of claim 1, wherein before the obtaining of the offset matrix, the method further comprises:
acquiring a moving matrix, wherein the moving matrix is camera moving data corresponding to the AR device in the moving process, and the moving matrix is used for positioning and moving rendering cameras respectively corresponding to the double pupils, so that the position data of the rendering cameras in the virtual space and the position data of the AR device in the image acquisition environment are in accordance with the mapping relation between the virtual space and the image acquisition environment of the AR device;
based on the moving matrix, adjusting the rendering cameras respectively corresponding to the two pupils to a second relative position, wherein the second relative position and the position of the AR device in the image acquisition environment are in accordance with the mapping relation between the virtual space and the image acquisition environment of the AR device.
6. The method of any of claims 1 to 5, wherein said acquiring interpupillary distance data comprises:
acquiring the interpupillary distance data in a case where rendering camera position data and virtual object position data meet a verification condition, the rendering camera position data being position data of the rendering camera in the virtual space, the virtual object position data being position data of the virtual object in the virtual space, and the verification condition indicating a degree to which the rendering camera position data and the virtual object position data conform to the mapping relationship.
7. The method of claim 6, wherein the acquiring of the interpupillary distance data in the case where the rendering camera position data and the virtual object position data meet the verification condition comprises:
obtaining the rendering camera position data and device position data, verifying the rendering camera position data, and determining that the rendering camera position data meets the verification condition when the rendering camera position data and the device position data meet the mapping relationship, wherein the device position data is position data of the AR device in the image acquisition environment;
acquiring the interpupillary distance data when the rendering camera position data meets the verification condition.
8. The method of claim 6, wherein the acquiring of the interpupillary distance data in the case where the rendering camera position data and the virtual object position data meet the verification condition comprises:
acquiring the virtual object position data and object acquisition position data, verifying the virtual object position data, and determining that the virtual object position data meets the verification condition when the virtual object position data and the object acquisition position data meet the mapping relation, wherein the object acquisition position data is position data of an entity object corresponding to the virtual object in the image acquisition environment;
and acquiring the pupil distance data under the condition that the virtual object position data meets the verification condition.
9. A virtual reality-based picture rendering apparatus, comprising:
the pupil distance data acquisition module is used for acquiring pupil distance data, and the pupil distance data is used for indicating the pupil distance corresponding to a wearer of the virtual reality AR device;
a position adjusting module, configured to adjust, based on the interpupillary distance data, rendering cameras in a virtual space that respectively correspond to two pupils to a first relative position, where a distance between the rendering cameras in the first relative position corresponds to the interpupillary distance data, and a mapping relationship exists between the virtual space and an image acquisition environment of the AR device;
an offset matrix obtaining module, configured to obtain an offset matrix, where the offset matrix is camera offset data corresponding to the AR device wearer, and the offset matrix is used to perform object coincidence adjustment on rendering cameras respectively corresponding to the dual pupils;
and the rendering result acquisition module is used for performing position correction on the rendering cameras respectively corresponding to the double pupils by using the offset matrix, and rendering the virtual object by the corrected rendering cameras based on the position data of the virtual object in the virtual space to obtain the rendering result of the virtual object in the image acquisition environment of the AR device.
10. A computer device comprising a processor and a memory, wherein the memory stores at least one program, and the at least one program is loaded and executed by the processor to implement the virtual reality-based picture rendering method according to any one of claims 1 to 8.
11. A computer-readable storage medium, wherein at least one program is stored in the storage medium, and the at least one program is loaded and executed by a processor to implement the virtual reality-based picture rendering method according to any one of claims 1 to 8.
12. A computer program product comprising a computer program which, when executed by a processor, implements the virtual reality-based picture rendering method according to any one of claims 1 to 8.
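Outside the claims and purely as an illustrative sketch, the verification condition referred to in claims 6 to 8 could be checked by mapping a virtual-space position into the image acquisition environment and comparing it with the measured position within a tolerance. The mapping relation is assumed here to be a rigid transform, and the function names and the tolerance value are assumptions.

```python
import numpy as np

def to_acquisition_env(virtual_pos, mapping_rotation, mapping_translation):
    """Map a virtual-space position into the image acquisition environment
    under an assumed rigid (rotation + translation) mapping relation."""
    return mapping_rotation @ virtual_pos + mapping_translation

def meets_verification_condition(virtual_pos, measured_pos,
                                 mapping_rotation, mapping_translation,
                                 tolerance_m=0.01):
    """True if the mapped virtual-space position agrees with the measured
    position in the acquisition environment to within the tolerance."""
    predicted = to_acquisition_env(virtual_pos, mapping_rotation, mapping_translation)
    return np.linalg.norm(predicted - measured_pos) <= tolerance_m

# Example with made-up values: identity mapping, rendering camera at the same
# place as the AR device, so the verification condition is met.
R, t = np.eye(3), np.zeros(3)
camera_virtual = np.array([0.1, 1.5, 0.0])
device_measured = np.array([0.1, 1.5, 0.0])
if meets_verification_condition(camera_virtual, device_measured, R, t):
    print("verification condition met: acquire interpupillary distance data")
```

The same check applies symmetrically to the virtual object position data of claim 8, with the object acquisition position data as the measured position.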
CN202211604263.4A 2022-12-13 2022-12-13 Virtual reality-based picture rendering method, device, equipment and program product Pending CN115834863A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211604263.4A CN115834863A (en) 2022-12-13 2022-12-13 Virtual reality-based picture rendering method, device, equipment and program product

Publications (1)

Publication Number Publication Date
CN115834863A true CN115834863A (en) 2023-03-21

Family

ID=85547198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211604263.4A Pending CN115834863A (en) 2022-12-13 2022-12-13 Virtual reality-based picture rendering method, device, equipment and program product

Country Status (1)

Country Link
CN (1) CN115834863A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination