CN117523139A - Image processing method, device, system, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117523139A
Authority
CN
China
Prior art keywords
pixel
information
image
displayed
preset
Prior art date
Legal status
Pending
Application number
CN202210899214.1A
Other languages
Chinese (zh)
Inventor
弓殷强
高飞
郭俊佳
李屹
Current Assignee
Shenzhen Appotronics Corp Ltd
Original Assignee
Appotronics Corp Ltd
Priority date
Filing date
Publication date
Application filed by Appotronics Corp Ltd
Priority to CN202210899214.1A
Publication of CN117523139A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/593 Depth or shape recovery from multiple images from stereo images
    • G06T 7/70 Determining position or orientation of objects or cameras

Abstract

The application discloses an image processing method, apparatus, system, electronic device, and storage medium, relating to the technical field of image display. The method is applied to an electronic device and includes: obtaining an image to be displayed at the current moment, depth information of the image to be displayed, and motion information of the electronic device at the current moment; determining current position information of each pixel in the image to be displayed based on the depth information; predicting position change information of each pixel within a preset delay duration according to the motion information; and correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image, the target display image being superimposed and displayed in a real scene. In this way, the electronic device combines its own motion information with the depth information to correct the position of each pixel in the image to be displayed, compensating the real-scene anchoring error caused by motion within the preset delay duration. This improves the accuracy of virtual-real fusion and the user experience.

Description

Image processing method, device, system, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image display technologies, and in particular, to an image processing method, an image processing device, an image processing system, an electronic device, and a storage medium.
Background
Augmented reality (AR) is a technology that seamlessly fuses virtual information with the real world. It draws on a range of techniques, such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing. Computer-generated virtual information, such as text, images, three-dimensional models, music, and video, is simulated and then applied to the real world, where the two kinds of information complement each other, thereby "augmenting" the real world. Because the virtual information can be perceived by the human senses in this process, a sensory experience beyond reality is achieved.
However, delay is difficult to avoid for an AR device. While the AR device is moving, this delay causes the final display position of a virtual object to drift from its target position, which degrades the virtual-real fusion effect and in turn the user experience of the AR device.
Disclosure of Invention
In view of this, the present application proposes an image processing method, apparatus, system, electronic device, and storage medium.
In a first aspect, an embodiment of the present application provides an image processing method, applied to an electronic device, where the method includes: acquiring an image to be displayed at the current moment, depth information of the image to be displayed and motion information of the electronic equipment at the current moment; determining current position information of each pixel in the image to be displayed based on the depth information; predicting the position change information of each pixel in a preset delay time according to the motion information; and correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image, wherein the target display image is used for being displayed in a superimposed manner in a real scene.
In a second aspect, an embodiment of the present application provides an image processing apparatus, applied to an electronic device, including: an information acquisition module, a position determination module, a prediction module, and an image correction module. The information acquisition module is used for acquiring an image to be displayed at the current moment, depth information of the image to be displayed, and motion information of the electronic device at the current moment; the position determination module is used for determining the current position information of each pixel in the image to be displayed based on the depth information; the prediction module is used for predicting the position change information of each pixel within a preset delay duration according to the motion information; and the image correction module is used for correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image, the target display image being superimposed and displayed in a real scene.
In a third aspect, an embodiment of the present application provides an image processing system applied to an electronic device, where the system includes: the system comprises a motion information acquisition module, an image source module, an image processing module and a display module. The motion information acquisition module is used for acquiring motion information of the electronic equipment at the current moment and transmitting the motion information to the image processing module; the image source module is used for transmitting an image to be displayed at the current moment and depth information of the image to be displayed to the image processing module; the image processing module is used for receiving the motion information of the electronic equipment at the current moment, which is transmitted by the motion information acquisition module, and the image to be displayed and the depth information of the image to be displayed, which are transmitted by the image source module; determining current position information of each pixel in the image to be displayed based on the depth information; predicting the position change information of each pixel in a preset delay time according to the motion information; correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image; transmitting the target display image to the display module; and the display module receives the target display image and displays the target display image in a superposition manner in a real scene.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the methods described above.
In a fifth aspect, embodiments of the present application provide a computer readable storage medium having program code stored therein, the program code being callable by a processor to perform the method described above.
In the solution provided by the application, an image to be displayed at the current moment, depth information of the image to be displayed, and motion information of the electronic device at the current moment are obtained; current position information of each pixel in the image to be displayed is determined based on the depth information; position change information of each pixel within a preset delay duration is predicted according to the motion information; and the current position information of each pixel in the image to be displayed is corrected based on the position change information to obtain a target display image, the target display image being superimposed and displayed in a real scene. In this way, the electronic device combines its own motion information with the depth information of the image to be displayed to correct the position of each pixel, thereby correcting the drift of the image superimposed in the real scene caused by the motion of the electronic device within the preset delay duration. This compensates the real-scene anchoring error caused by the display delay, improves the accuracy of virtual-real fusion, enhances the user's visual immersion and realism, and improves the user experience.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for a person skilled in the art, other drawings may be obtained from these drawings without inventive effort.
Fig. 1 shows a system architecture diagram of an image display system of an image processing method according to an embodiment of the present application.
Fig. 2 is a flow chart illustrating an image processing method according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of an image processing method according to another embodiment of the present application.
Fig. 4 shows a schematic flow chart of the sub-steps of step S320 in fig. 3 in an embodiment.
Fig. 5 shows a schematic flow chart of the substep of step S330 in fig. 3 in an embodiment.
Fig. 6 shows a schematic flow chart of the sub-step of step S350 in fig. 3 in an embodiment.
Fig. 7 is a flowchart illustrating an image processing method according to still another embodiment of the present application.
Fig. 8 is a block diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 9 is a block diagram of an electronic device for performing an image processing method according to an embodiment of the present application.
Fig. 10 shows a storage unit, according to an embodiment of the present application, for storing or carrying program code implementing the image processing method.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings.
Augmented reality (AR) is a technology that seamlessly fuses virtual information with the real world. It draws on a range of techniques, such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing. Computer-generated virtual information, such as text, images, three-dimensional models, music, and video, is simulated and then applied to the real world, where the two kinds of information complement each other, thereby "augmenting" the real world. Because the virtual information can be perceived by the human senses in this process, a sensory experience beyond reality is achieved.
However, delay is difficult to avoid for an AR device. While the AR device is moving, this delay causes the final display position of a virtual object to drift from its target position, which degrades the virtual-real fusion effect and in turn the user experience of the AR device.
In view of the above problems, the inventors propose an image processing method, apparatus, system, electronic device, and storage medium, which combine the motion information of the electronic device with the depth information of the image to be displayed to correct the display position of each pixel in the image to be displayed, that is, to implement motion compensation, thereby obtaining a target display image that is superimposed and displayed in the real scene. This is described in detail below.
Referring to fig. 1, fig. 1 is a block diagram of an image display system 10 according to an embodiment of the present application. The system 10 may include an image processing module 101, an image source module 102, a motion information acquisition module 103, and a display module 104. The image source module 102 may be configured to provide the image to be displayed and its depth information to the image processing module 101; the image source module may transmit a depth map of the image to be displayed, or directly transmit the depth value of each pixel in the image to be displayed, which is not limited in this embodiment. The motion information acquisition module 103 is configured to acquire motion information of the electronic device at the current moment and transmit it to the image processing module. The motion information acquisition module 103 may obtain the motion information of the electronic device through positioning technologies such as purely visual simultaneous localization and mapping (SLAM), which performs localization and mapping using only one or more cameras (e.g., ORB-SLAM); visual-inertial SLAM, which uses one or more cameras together with an inertial measurement unit (e.g., VINS); lidar SLAM, which uses a lidar, optionally combined with inertial measurement (e.g., LOAM); or fixed-base-station positioning, which is not limited in this embodiment.
Optionally, the image processing module 101 may be configured to perform image processing on the image to be displayed according to the acquired motion information, the image to be displayed, and the depth information of the image to be displayed, that is, determine, based on the depth information, current position information of each pixel in the image to be displayed; predicting the position change information of each pixel in a preset delay time according to the motion information; and correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image, and outputting the final target display image to the display module 104. Further, the display module 104 is configured to superimpose and display the target display image in a real scene, so as to project the target display image into human eyes.
Referring to fig. 2, fig. 2 is a flowchart of an image processing method according to an embodiment of the present application, which is applied to an electronic device. The image processing method provided in the embodiment of the present application will be described in detail with reference to fig. 2. The image processing method may include the steps of:
step S210: and acquiring an image to be displayed at the current moment, depth information of the image to be displayed and motion information of the electronic equipment at the current moment.
In this embodiment, the electronic device may be augmented reality (AR) glasses or mixed reality (MR) glasses, but may also be another device that supports AR or MR technology, for example, a smartphone, a tablet computer, or a notebook computer, which is not limited in this embodiment.
The image to be displayed is the image that the electronic device needs to display at the current moment, and it contains a virtual object. In AR or MR technology, in order to enhance the user's visual immersion and realism, the scene displayed by the electronic device is generally three-dimensional, and correspondingly the image to be displayed is also presented in three-dimensional form. Therefore, the electronic device also acquires the depth information of the image to be displayed, which can be understood as the current depth of each pixel in the image. In practical applications, the electronic device is generally in motion; for example, when the user wears AR glasses to play an AR game, the head inevitably moves, and it is understood that the AR glasses rotate along with the user's head. Therefore, the motion information of the electronic device at the current moment can be acquired, and the virtual object can be superimposed and displayed in combination with this motion information, improving the accuracy of the position at which the electronic device displays the virtual object. The motion information may be a moving speed and/or an angular velocity of the electronic device, which is not limited in this embodiment.
Step S220: and determining the current position information of each pixel in the image to be displayed based on the depth information.
In this embodiment, the current position information may be understood as current spatial position information of each pixel in the three-dimensional scene displayed by the electronic device, that is, the current position information is three-dimensional position information. It can be understood that the image to be displayed only includes the position of each pixel in the two-dimensional plane space, and if the current spatial position information of each pixel in the three-dimensional scene finally displayed by the electronic device is to be obtained, the current position information of each pixel in the image to be displayed can be determined by combining the depth of each pixel in the depth information.
Step S230: and predicting the position change information of each pixel in the preset delay time according to the motion information.
In this embodiment, there is a certain delay between the moment the electronic device obtains the image to be displayed and the moment it actually displays it; this delay is the preset delay duration, which may be set in advance, for example, to 100 ms or 1 s, and is not limited in this embodiment. It can be understood that, if the electronic device moves within the preset delay duration, its viewing angle at display time differs from its viewing angle when it obtained the image to be displayed. Displaying the image directly at this point would cause a deviation in the position at which the virtual objects in the image are superimposed, affecting the virtual-real fusion effect.
Based on this, the electronic device can predict its own position change information within the preset delay duration from its motion information. It can be understood that the position at which the virtual object needs to be displayed changes along with the position of the electronic device. Therefore, the position change information of the electronic device within the preset delay duration can be predicted and used as the position change information of each pixel within the preset delay duration.
In some embodiments, the motion information may be filtered based on a preset filtering algorithm to obtain target motion information; and predicting the position change information of each pixel in a preset delay time according to the target motion information. The preset filtering algorithm includes, but is not limited to, mean filtering, median filtering, kalman filtering, and the like. Because the motion information at any moment generally has noise, the motion information is filtered before the acquired motion information is used for calculating the position change information, so that more accurate motion information with noise removed is obtained and is used as target motion information, further, the position change information predicted based on the target motion information is more accurate, and the influence caused by errors of the motion information is reduced.
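As a concrete illustration of this filtering step, the sketch below applies a plain moving-average filter over a short window of velocity readings; the window size, sample values, and function name are illustrative assumptions, and a median or Kalman filter would slot into the same place in a real pipeline.

```python
def filter_motion(samples, window=5):
    """Moving-average filter over the most recent motion samples.

    `samples` is a list of (vx, vy, vz) velocity (or angular-velocity)
    readings; the mean of the last `window` readings serves as the
    denoised target motion information. (Illustrative sketch, not the
    application's actual filter.)
    """
    recent = samples[-window:]
    n = len(recent)
    return tuple(sum(axis) / n for axis in zip(*recent))

# Noisy readings around a true velocity of (0.10, 0.00, 0.00) m/s
readings = [(0.12, 0.01, -0.01), (0.09, -0.02, 0.00),
            (0.11, 0.00, 0.01), (0.08, 0.01, 0.00), (0.10, 0.00, 0.00)]
v = filter_motion(readings)  # averages the noise away
```

The averaged velocity then feeds the position-change prediction of step S230 in place of the raw reading.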
Step S240: and correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image, wherein the target display image is used for being displayed in a superimposed manner in a real scene.
Further, after the position change information is obtained, the actual position information of each pixel can be determined according to the current position information and the position change information of each pixel; and correcting the current position information of each pixel in the image to be displayed according to the actual position information of each pixel to obtain a target display image.
For example, suppose the user wearing the AR glasses faces forward at the current time T1 and the virtual object is located at the center of the display screen. If the electronic device moves rightward by a target distance within the preset delay duration T0, the electronic device correspondingly corrects the current position information of each pixel in the image to be displayed according to the target distance to obtain the target display image, so that when the target display image is displayed at time T1+T0, the virtual object has moved leftward, relative to the center of the display screen, by the pixel displacement corresponding to the target distance.
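The magnitude of such a corrective pixel shift can be estimated with a back-of-the-envelope pinhole-camera calculation; the focal length, movement, and depth values below are illustrative assumptions, not parameters from this application.

```python
def pixel_shift(focal_length_px: float, lateral_move_m: float, depth_m: float) -> float:
    # Under a pinhole model, a lateral device translation T at object depth d
    # displaces the object's image by roughly f * T / d pixels, in the
    # direction opposite to the device motion (small-motion approximation).
    return focal_length_px * lateral_move_m / depth_m

# 800 px focal length, 5 cm rightward move, virtual object 2 m away
shift = pixel_shift(800.0, 0.05, 2.0)  # → 20.0 px shift to the left
```

Note how the shift grows for nearby objects: this is why the per-pixel depth information matters for the correction.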
Based on the method, after the target display image is obtained, the target display image can be overlapped and displayed in the real scene, so that the virtual object and/or the virtual scene can be accurately overlapped and displayed without drift in the real scene, and the display effect of more real virtual-real combination is achieved.
Alternatively, the electronic device may superimpose the target display image on the real scene through a monitor-based display technology, a video see-through technology, or an optical see-through technology; for head-mounted devices, the latter two correspond to a video see-through HMD (Video See-through HMD), which merges the image with a camera feed, and an optical see-through HMD (Optical See-through HMD), which overlays the image optically. This embodiment is not limited in this respect.
In this embodiment, the electronic device obtains the image to be displayed at the current moment, the depth information of the image to be displayed, and the motion information of the electronic device at the current moment; determines the current position information of each pixel in the image to be displayed based on the depth information; predicts the position change information of each pixel within the preset delay duration according to the motion information; corrects the current position information of each pixel in the image to be displayed based on the position change information to obtain the target display image; and superimposes the target display image on the real scene. That is, the electronic device combines its own motion information with the depth information of the image to be displayed to correct the position of each pixel, thereby correcting the drift of the superimposed image caused by the motion of the electronic device within the preset delay duration. This compensates the real-scene anchoring error caused by the display delay, improves the accuracy of virtual-real fusion, enhances the user's visual immersion and realism, and improves the user experience.
Referring to fig. 3, fig. 3 is a flowchart of an image processing method according to another embodiment of the present application, which is applied to an electronic device. The image processing method provided in the embodiment of the present application will be described in detail with reference to fig. 3. The image processing method may include the steps of:
step S310: and acquiring an image to be displayed at the current moment, depth information of the image to be displayed and motion information of the electronic equipment at the current moment.
In this embodiment, the specific implementation of step S310 may refer to the content in the foregoing embodiment, which is not described herein.
Step S320: and determining the current position information of each pixel in the image to be displayed based on the depth information.
In some embodiments, the depth information includes a depth value corresponding to each pixel, and the current location information is second coordinate information in a display coordinate system, referring to fig. 4, step S320 may include the following steps:
step S321: and acquiring pixel coordinate information of each pixel in a pixel coordinate system as second pixel coordinate information of each pixel.
The pixel coordinate system is a coordinate system whose origin is the upper left corner of the display panel of the electronic device; for example, the first pixel in the upper left corner has coordinates (0, 0), and the coordinates increase pixel by pixel to the right and downward. The pixel coordinate system is a two-dimensional coordinate system. It can be understood that, after the electronic device obtains the image to be displayed, it can obtain the pixel coordinate information of each pixel of that image in the pixel coordinate system; for convenience of subsequent description, this pixel coordinate information is referred to as the second pixel coordinate information of each pixel.
Step S322: and converting the second pixel coordinate information of each pixel into vector information in the display coordinate system according to preset conversion parameters between the display coordinate system and the pixel coordinate system, so as to obtain the three-dimensional direction vector of each pixel.
The display coordinate system is established based on the display device in the electronic equipment and can be understood as the optical engine coordinate system, whose origin is the optical center of the optical engine. The preset conversion parameters between the display coordinate system and the pixel coordinate system include the intrinsic parameters of the optical engine, e.g., the focal length f and the principal point coordinates (c_x, c_y).
Based on this, the second pixel coordinate information of each pixel is first converted into three-dimensional coordinate information. Specifically, let X_i denote any pixel in the image to be displayed, and let its second pixel coordinate information in the pixel coordinate system be (p_x, p_y). X_i is converted into a three-dimensional vector by appending a 1:

X_i = (p_x, p_y, 1)^T

Further, using the focal length f and the principal point coordinates (c_x, c_y), the three-dimensional vector X_i corresponding to each pixel is converted into a three-dimensional direction vector x_i in the display coordinate system by the following formula:

x_i = ((p_x - c_x)/f, (p_y - c_y)/f, 1)^T
Step S323: and obtaining the product of the three-dimensional direction vector of each pixel and the depth value corresponding to each pixel as second coordinate information of each pixel in the display coordinate system.
Optionally, the depth value corresponding to each pixel is used to represent the distance between each pixel and the origin of the display coordinate system, so after the three-dimensional direction vector of each pixel in the display coordinate system is obtained, the depth value of each pixel can be combined to obtain the second coordinate information of each pixel in the display coordinate system. The display coordinate system is a three-dimensional coordinate system, the second coordinate information corresponds to a three-dimensional space coordinate, and it can be understood that combining the depth value corresponding to each pixel, each pixel point in the image to be displayed is restored to a real three-dimensional scene, namely three-dimensional scene modeling is realized.
Specifically, the second coordinate information of each pixel in the display coordinate system can be calculated by the following formula:

y_i = x_i * d_i

where y_i denotes the second coordinate information of each pixel, d_i denotes the depth value corresponding to each pixel, and x_i denotes the three-dimensional direction vector of each pixel.
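Steps S321 to S323 can be sketched together as a single back-projection routine. The function below is an assumed illustration built from the pinhole intrinsics f and (c_x, c_y) described above, not code from this application.

```python
def pixel_to_camera(px: float, py: float, depth: float,
                    f: float, cx: float, cy: float) -> tuple:
    """Back-project a pixel into the display (optical engine) coordinate system.

    Form X_i = (px, py, 1)^T, convert it to the direction vector
    x_i = ((px - cx)/f, (py - cy)/f, 1)^T, then scale by the depth
    value d_i to obtain the 3D point y_i = x_i * d_i.
    """
    x_i = ((px - cx) / f, (py - cy) / f, 1.0)
    return tuple(c * depth for c in x_i)

# A pixel at the principal point maps straight down the optical axis.
point = pixel_to_camera(960.0, 540.0, 2.0, 800.0, 960.0, 540.0)  # → (0.0, 0.0, 2.0)
```

Applying this to every pixel with its depth value is exactly the three-dimensional scene modeling described above.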
Step S330: and predicting the position change information of each pixel in the preset delay time according to the motion information.
In some embodiments, the motion information includes a moving speed and an angular velocity, and the position change information includes angle change information and distance change information. The motion information is obtained by converting motion data acquired by a SLAM module. The motion data acquired by the SLAM module is expressed in the SLAM coordinate system, which can be understood as a three-dimensional coordinate system established by the electronic device. Since the relative pose between the electronic device and the display screen is fixed, the conversion relationship between the SLAM coordinate system and the display coordinate system is also fixed, and the electronic device can convert the motion data in the SLAM coordinate system into motion information in the display coordinate system according to this fixed conversion relationship. Referring to fig. 5, step S330 may include the following steps:
Step S331: and determining the distance change information according to the moving speed and the preset delay time length.
Alternatively, the distance change information may be represented by a translation vector, obtained as the product of the moving speed and the preset delay duration; specifically, the translation vector of the electronic device may be calculated by the following formula:

T = v · Δt

wherein v characterizes the moving speed of the electronic device, Δt characterizes the preset delay duration, and T characterizes the translation vector of each pixel.
Step S332: and determining the angle change information according to the angular speed and the preset delay time length.
Alternatively, the angle change information may be represented by a rotation matrix, and the angular velocity includes the angular velocities of the electronic device about the horizontal axis (x-axis), the vertical axis (y-axis) and the longitudinal axis (z-axis) of the display coordinate system. Based on this, the angle change information can be calculated by the following formula:

R = I + [ω]× · Δt,  where [ω]× = ((0, −ω_3, ω_2), (ω_3, 0, −ω_1), (−ω_2, ω_1, 0))

wherein R characterizes the rotation matrix of each pixel, I is the 3*3 identity matrix, ω_1 characterizes the angular velocity of the electronic device about the x-axis of the display coordinate system, ω_2 characterizes the angular velocity of the electronic device about the y-axis of the display coordinate system, ω_3 characterizes the angular velocity of the electronic device about the z-axis of the display coordinate system, and Δt characterizes the preset delay duration.
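Steps S331 and S332 can be sketched together as follows. This uses the first-order (small-angle) rotation approximation consistent with the identity-matrix term described above; the function name, argument units, and the choice of the first-order form rather than the exact matrix exponential are assumptions:

```python
import numpy as np

# Sketch: predict the pose change over the delay period Δt.
# Translation: T = v * Δt.  Rotation (small-angle, first order):
# R ≈ I + [ω]x * Δt, where [ω]x is the skew-symmetric matrix of
# the angular velocities (ω1, ω2, ω3) about the x, y, z axes.
def predict_pose_change(v, omega, dt):
    v = np.asarray(v, dtype=float)
    w1, w2, w3 = omega
    skew = np.array([[0.0, -w3,  w2],
                     [ w3, 0.0, -w1],
                     [-w2,  w1, 0.0]])
    R = np.eye(3) + skew * dt   # rotation matrix over the delay period
    T = v * dt                  # translation vector over the delay period
    return R, T
```

For larger rotations within the delay period, the exact exponential map of [ω]×·Δt would be a more accurate alternative.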
Step S340: and determining the actual position information of each pixel according to the current position information, the angle change information and the distance change information of each pixel.
Based on this, after the current position information, the angle change information and the distance change information of each pixel are acquired, the actual position information of each pixel after the display delay can be calculated: the current position of each pixel is rotated according to the angle change information and translated according to the distance change information, so that the actual position information of each pixel is obtained. Specifically, the actual position information of each pixel can be calculated by the following formula:
y′_i = R · y_i + T

wherein y′_i characterizes the actual position information of each pixel, R characterizes the rotation matrix of each pixel, T characterizes the translation vector of each pixel, and y_i characterizes the second coordinate information (i.e., the current position information) of each pixel.
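The rigid correction y′_i = R·y_i + T applied to all pixels at once can be sketched as follows (names assumed):

```python
import numpy as np

# Sketch: apply the predicted rotation R and translation T to every
# pixel's current 3D position, y'_i = R @ y_i + T, vectorized over N pixels.
def correct_positions(points: np.ndarray, R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """points: (N, 3) current positions; R: (3, 3); T: (3,).
    Returns (N, 3) predicted actual positions after the display delay."""
    return points @ R.T + T
```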
Step S350: and moving each pixel in the image to be displayed from the pixel position corresponding to the current position information to the pixel position corresponding to the actual position information to obtain the target display image.
In some embodiments, the actual position information is the first coordinate information in the display coordinate system, referring to fig. 6, step S350 may include the following steps:
Step S351: and acquiring pixel coordinate information of each pixel in the image to be displayed in a pixel coordinate system, and taking the pixel coordinate information as second pixel coordinate information of each pixel.
In this embodiment, the specific implementation of step S351 may refer to the foregoing, and this embodiment is not repeated here.
Step S352: and converting the first coordinate information of each pixel in the display coordinate system into the pixel coordinate information in the pixel coordinate system according to a preset conversion parameter between the display coordinate system and the pixel coordinate system, and taking the pixel coordinate information as the first pixel coordinate information of each pixel.
It will be appreciated that the actual position information (i.e., the first coordinate information) is three-dimensional coordinate information of each pixel in the display coordinate system, whereas correction of the image is performed on a two-dimensional plane. Therefore, after the electronic device obtains the actual position information of each pixel, it may convert the first coordinate information of each pixel in the display coordinate system into pixel coordinate information in the pixel coordinate system by using the preset conversion parameter between the display coordinate system and the pixel coordinate system; the resulting first pixel coordinate information is the position where each pixel should actually be displayed in the image, accounting for the display delay. Specifically, the first coordinate information of each pixel in the display coordinate system may be converted into first pixel coordinate information in the pixel coordinate system by the following formula:
s · X′_i = K · y′_i

wherein X′_i is the actual pixel position of each pixel in the pixel coordinate system, K characterizes the preset conversion parameter between the display coordinate system and the pixel coordinate system, and s represents normalization with respect to the third component of X′_i, i.e.:

X′_i → (X′_{i,1}/X′_{i,3}, X′_{i,2}/X′_{i,3}, 1)^T

Further, the normalized X′_i is obtained as the first pixel coordinate information of each pixel.
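This projection and normalization can be sketched as follows; the 3×3 matrix K here is an assumed pinhole-style stand-in for the "preset conversion parameter", and its values are purely illustrative:

```python
import numpy as np

# Sketch: convert corrected 3D positions in the display coordinate system
# to 2D pixel coordinates by multiplying with an assumed conversion matrix
# K, then dividing by the third homogeneous component (the normalization
# X'_i -> (X'_{i,1}/X'_{i,3}, X'_{i,2}/X'_{i,3}, 1)^T).
def project_to_pixels(points: np.ndarray, K: np.ndarray) -> np.ndarray:
    X = points @ K.T              # homogeneous pixel coordinates, shape (N, 3)
    return X[:, :2] / X[:, 2:3]   # normalize by the third component

# Example: a point on the optical axis projects to the principal point.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = project_to_pixels(np.array([[0.0, 0.0, 2.0]]), K)   # [[320, 240]]
```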
Step S353: and adjusting the pixel value of the pixel position corresponding to the first pixel coordinate information corresponding to each pixel according to the pixel value of the pixel position corresponding to the second pixel coordinate information and a preset interpolation method, so as to obtain the target display image, wherein the target display image is used for being displayed in a real scene in a superposition way.
In the present embodiment, after the first pixel coordinate information (the pixel coordinate information of the actual pixel position) and the second pixel coordinate information (the pixel coordinate information of the current pixel position) of each pixel in the pixel coordinate system are acquired, the pixel at the pixel position corresponding to the second pixel coordinate information may be moved to the pixel position corresponding to the first pixel coordinate information for display. The pixel movement can be equivalently realized by changing pixel values: specifically, according to the pixel value of each pixel at the pixel position corresponding to the second pixel coordinate information and a preset interpolation method, the pixel value at the pixel position corresponding to the first pixel coordinate information is adjusted, so as to obtain the target display image. The preset interpolation method includes, but is not limited to, a nearest neighbor interpolation algorithm or a bilinear interpolation algorithm, which is not limited in this embodiment.
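A minimal sketch of this pixel-movement step with nearest-neighbor rounding follows (bilinear interpolation would instead distribute each value over the four surrounding positions with distance weights); all names and the single-channel image assumption are illustrative:

```python
import numpy as np

# Sketch: move each source pixel from its current position (second pixel
# coordinates) to its predicted position (first pixel coordinates), using
# nearest-neighbor rounding and discarding pixels that land out of bounds.
def forward_warp_nearest(src: np.ndarray,
                         src_xy: np.ndarray,
                         dst_xy: np.ndarray) -> np.ndarray:
    """src: (H, W) image; src_xy, dst_xy: (N, 2) integer-valued (x, y) pairs."""
    h, w = src.shape[:2]
    out = np.zeros_like(src)
    xi = np.rint(dst_xy[:, 0]).astype(int)
    yi = np.rint(dst_xy[:, 1]).astype(int)
    ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)   # keep in-bounds targets
    sx = src_xy[ok, 0].astype(int)
    sy = src_xy[ok, 1].astype(int)
    out[yi[ok], xi[ok]] = src[sy, sx]   # copy values to corrected positions
    return out
```

In practice a library remapping routine with bilinear weighting would typically replace this loop-free but simplistic forward mapping.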
In this embodiment, the translation and rotation of the electronic device are compensated by combining the moving speed and angular speed of the electronic device with the depth information of the image to be displayed, so that the anchoring deviation of the superimposed virtual image caused by movement of the electronic device within the display delay period is effectively compensated, and the accuracy of virtual-real combination is improved. Moreover, by utilizing the motion data provided by the SLAM technology and the depth map information of the virtual object, the electronic device can achieve better delay compensation and anchoring effects, improving the performance of the display module and, ultimately, the user experience.
Referring to fig. 7, fig. 7 is a flowchart of an image processing method according to another embodiment of the present application, which is applied to an electronic device. The image processing method provided in the embodiment of the present application will be described in detail below with reference to fig. 7. The image processing method may include the steps of:
step S410: and acquiring an image to be displayed at the current moment, depth information of the image to be displayed and motion information of the electronic equipment at the current moment.
In this embodiment, the specific implementation of step S410 may refer to the content in the foregoing embodiment, which is not described herein.
Step S420: and judging whether the motion information meets a preset motion compensation condition or not.
In some embodiments, the motion information includes a movement speed and an angular speed of the electronic device, and determining whether the movement speed reaches a preset movement speed threshold, and whether the angular speed reaches a preset angular speed threshold; if the moving speed reaches the preset moving speed threshold value and the angular speed reaches the preset angular speed threshold value, judging that the motion information meets the preset motion compensation condition; and if the moving speed does not reach the preset moving speed threshold value and/or the angular speed does not reach the preset angular speed threshold value, judging that the motion information does not meet the preset motion compensation condition. The preset angular velocity threshold and the preset moving velocity threshold may be preset values, or of course, the values may be adjusted according to requirements for display accuracy in an actual application scenario, which is not limited in this embodiment. In this way, before the image to be displayed is subjected to motion compensation based on the motion information, whether the movement trend and/or the rotation trend of the electronic equipment meet the preset motion compensation condition is judged by judging whether the movement speed and/or the angular speed of the electronic equipment meet the respective corresponding threshold values; the problems of computing resource waste and the like caused by correcting an image to be displayed when the electronic equipment does not have a moving trend and/or a rotating trend are avoided; meanwhile, the problems of frequent image correction and the like caused by jitter errors of motion information can be avoided, and the stabilizing effect of virtual-real combination is ensured.
In other embodiments, the image to be displayed may be understood as an image corresponding to a virtual object to be displayed. The distance between the virtual object and the electronic device may be determined according to the depth information of the virtual object, and it may be judged whether the distance is smaller than a target distance. If so, the electronic device is characterized as being close to the virtual object; in this case, the moving speed and angular speed of the electronic device cannot be ignored, so it can be judged whether the moving speed reaches a first moving speed threshold and whether the angular speed reaches a first angular speed threshold. If the moving speed reaches the first moving speed threshold and the angular speed reaches the first angular speed threshold, it is judged that the motion information satisfies the preset motion compensation condition; if the moving speed does not reach the first moving speed threshold and/or the angular speed does not reach the first angular speed threshold, it is judged that the motion information does not satisfy the preset motion compensation condition.
If the distance is not less than the target distance, the electronic device is characterized as being far away from the virtual object; in this case, small moving speeds and angular speeds of the electronic device are negligible, so it can be judged whether the moving speed reaches a second moving speed threshold and whether the angular speed reaches a second angular speed threshold. If the moving speed reaches the second moving speed threshold and the angular speed reaches the second angular speed threshold, it is judged that the motion information satisfies the preset motion compensation condition; if the moving speed does not reach the second moving speed threshold and/or the angular speed does not reach the second angular speed threshold, it is judged that the motion information does not satisfy the preset motion compensation condition. The second moving speed threshold is greater than the first moving speed threshold, and the second angular speed threshold is greater than the first angular speed threshold. Therefore, by setting different judgment conditions according to the distance between the electronic device and the virtual object, the correction precision of the image to be displayed can be improved, and the accuracy of virtual-real combination can be further improved.
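The distance-dependent judgment described above might be sketched as follows; every threshold value here is an assumed placeholder for illustration, not a value from this application:

```python
# Sketch of the motion-compensation condition with distance-dependent
# thresholds: closer virtual objects use smaller (more sensitive)
# thresholds, farther ones use larger thresholds.  All numbers are
# illustrative assumptions.
def needs_compensation(speed, angular_speed, distance,
                       target_distance=2.0,
                       near=(0.05, 0.02),    # (speed, angular) thresholds when close
                       far=(0.20, 0.10)):    # larger thresholds when far away
    v_th, w_th = near if distance < target_distance else far
    # Both the moving speed and the angular speed must reach their
    # thresholds for the condition to be satisfied, as described above.
    return speed >= v_th and angular_speed >= w_th
```

For example, the same motion that triggers compensation for a nearby virtual object may be ignored for a distant one, saving the correction cost.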
Step S430: if yes, determining the current position information of each pixel in the image to be displayed based on the depth information.
Step S440: and predicting the position change information of each pixel in the preset delay time according to the motion information.
Step S450: and correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image, wherein the target display image is used for being displayed in a superimposed manner in a real scene.
In this embodiment, the contents of steps S430 to S450 are executed only when the motion information satisfies the preset motion compensation condition; for the specific implementation, reference may be made to the contents of the foregoing embodiments, which are not repeated here.
Step S460: and if the image to be displayed does not meet the requirement, displaying the image to be displayed in a superimposed mode on the real scene.
It can be understood that when the motion information does not meet the preset motion compensation condition, the motion change caused by the motion speed and the angular speed representing the current moment of the electronic device within the preset delay time is very tiny, i.e. negligible, which is equivalent to that the anchoring deviation of the virtual object caused by the motion of the electronic device is very small and can not be perceived by human eyes. Therefore, the image to be displayed does not need to be corrected at this time, and the image to be displayed can be directly displayed in a superimposed manner in a real scene.
In this embodiment, before each pixel in the image to be displayed is corrected according to the position change information, whether the anchoring deviation caused by movement of the electronic device within the preset delay period warrants compensation is judged according to the moving speed and angular speed of the electronic device; if the preset motion compensation condition is satisfied, the position of each pixel in the image to be displayed is corrected, and if it is not satisfied, the image to be displayed is directly displayed in a superimposed manner. Therefore, the anchoring deviation of virtual-real combination caused by movement of the electronic device within the preset delay period can be effectively and accurately compensated, while computing resources are saved.
Referring to fig. 8, a block diagram of an image display apparatus 500 according to an embodiment of the present application is shown and applied to an electronic device. The apparatus 500 may include: an information acquisition unit 510, a position determination unit 520, a prediction unit 530, and an image correction unit 540.
The information obtaining unit 510 is configured to obtain an image to be displayed at a current time, depth information of the image to be displayed, and motion information of the electronic device at the current time.
The position determining unit 520 is configured to determine current position information of each pixel in the image to be displayed based on the depth information.
The prediction unit 530 is configured to predict, according to the motion information, the position change information of each pixel within a preset delay period.
The image correction unit 540 is configured to correct the current position information of each pixel in the image to be displayed based on the position change information, so as to obtain a target display image, where the target display image is used for being displayed in a superimposed manner in a real scene.
In some embodiments, the image display apparatus 500 may further include: a judging module and a display module. The judging module may be configured to judge whether the motion information satisfies a preset motion compensation condition after the motion information of the electronic device at the current moment is obtained. The position determining unit 520 may be configured to determine, based on the depth information, the current position information of each pixel in the image to be displayed when the motion information satisfies the preset motion compensation condition. The display module may be configured to display the image to be displayed in a superimposed manner in the real scene when the motion information does not satisfy the preset motion compensation condition.
In this manner, the motion information includes a moving speed and an angular speed, and the determining module may specifically be configured to: judging whether the moving speed reaches a preset moving speed threshold value or not and whether the angular speed reaches a preset angular speed threshold value or not; if the moving speed reaches the preset moving speed threshold value and the angular speed reaches the preset angular speed threshold value, judging that the motion information meets the preset motion compensation condition; and if the moving speed does not reach the preset moving speed threshold value and/or the angular speed does not reach the preset angular speed threshold value, judging that the motion information does not meet the preset motion compensation condition.
In some embodiments, the motion information includes a moving speed and an angular speed, the position change information includes angle change information and distance change information, and the prediction unit 530 may include: a distance prediction unit and an angle prediction unit. The distance prediction unit may be configured to determine the distance change information according to the moving speed and the preset delay duration. The angle prediction unit may be configured to determine the angle change information according to the angular speed and the preset delay duration.
In some embodiments, the image correction unit 540 may include: an actual position determination subunit and an image correction subunit. Wherein the actual position determining subunit may be configured to determine the actual position information of each pixel according to the current position information, the angle change information, and the distance change information of each pixel. The image correction subunit may be configured to move each pixel in the image to be displayed from a pixel position corresponding to the current position information to a pixel position corresponding to the actual position information, so as to obtain the target display image.
In this manner, the actual position information is first coordinate information in a display coordinate system, where the display coordinate system is established based on a display device in the electronic device, and the image correction unit 540 may specifically be configured to: acquiring pixel coordinate information of each pixel in the image to be displayed in a pixel coordinate system, and taking the pixel coordinate information as second pixel coordinate information of each pixel; converting the first coordinate information of each pixel in the display coordinate system into pixel coordinate information in the pixel coordinate system according to preset conversion parameters between the display coordinate system and the pixel coordinate system, and using the pixel coordinate information as the first pixel coordinate information of each pixel; and adjusting the pixel value of the pixel position corresponding to the first pixel coordinate information corresponding to each pixel according to the pixel value of each pixel at the pixel position corresponding to the second pixel coordinate information and a preset interpolation method to obtain the target display image.
In some embodiments, the depth information includes a depth value corresponding to each pixel, the current position information is second coordinate information in a display coordinate system, the display coordinate system is established based on a display device in the electronic device, and the position determining unit 520 may include: a pixel coordinate acquisition subunit, a vector acquisition subunit, and a coordinate acquisition subunit. The pixel coordinate acquisition subunit may be configured to acquire, as the second pixel coordinate information of each pixel, the pixel coordinate information of each pixel in the pixel coordinate system. The vector acquisition subunit may be configured to convert, according to a preset conversion parameter between the display coordinate system and the pixel coordinate system, the second pixel coordinate information of each pixel into vector information in the display coordinate system, so as to obtain the three-dimensional direction vector of each pixel. The coordinate acquisition subunit may be configured to acquire, as the second coordinate information of each pixel in the display coordinate system, the product of the three-dimensional direction vector of each pixel and the depth value corresponding to each pixel.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In several embodiments provided herein, the coupling of the modules to each other may be electrical, mechanical, or other.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
In summary, in the solution provided in the embodiment of the present application, the electronic device obtains the image to be displayed at the current time, the depth information of the image to be displayed, and the motion information of the electronic device at the current time; determining current position information of each pixel in the image to be displayed based on the depth information; predicting the position change information of each pixel in a preset delay time according to the motion information; and correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image, wherein the target display image is used for being displayed in a superimposed manner in a real scene. That is, the electronic device combines the motion information of the electronic device and the depth information of the image to be displayed, and corrects the position of each pixel in the image to be displayed, so as to correct the drift of the image superimposed and displayed in the real scene due to the motion of the electronic device within the preset delay time, that is, compensate the real scene anchoring error due to the display delay, that is, improve the accuracy of virtual-real combination, enhance the visual immersion and visual reality of the user, and improve the use experience of the user.
An electronic device provided in the present application will be described with reference to fig. 9.
Referring to fig. 9, fig. 9 shows a block diagram of an electronic device 600 according to an embodiment of the present application, where the method according to the embodiment of the present application may be performed by the electronic device 600. The electronic device 600 may be an AR head-mounted display device, an MR head-mounted display device, a smart phone, a tablet computer, a notebook computer, or the like capable of running an application program.
The electronic device 600 in embodiments of the present application may include one or more of the following components: a processor 601, a memory 602, and one or more application programs, wherein the one or more application programs may be stored in the memory 602 and configured to be executed by the one or more processors 601, the one or more program configured to perform the method as described in the foregoing method embodiments.
Processor 601 may include one or more processing cores. The processor 601 utilizes various interfaces and lines to connect various portions of the overall electronic device 600, and performs various functions of the electronic device 600 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 602, and invoking data stored in the memory 602. Alternatively, the processor 601 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 601 may integrate one or a combination of several of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, etc. The CPU mainly processes the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; the modem is used to handle wireless communications. It will be appreciated that the modem may also not be integrated into the processor 601 and instead be implemented solely by a communication chip.
The memory 602 may include random access memory (Random Access Memory, RAM) or read-only memory (Read-Only Memory, ROM). The memory 602 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 602 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (e.g., a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the foregoing method embodiments, etc. The stored data area may also store data created by the electronic device 600 in use (such as the various correspondences described above), and so forth.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In the several embodiments provided herein, the illustrated or discussed coupling or direct coupling or communication connection of the modules to each other may be through some interfaces, indirect coupling or communication connection of devices or modules, electrical, mechanical, or other forms.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
Referring to fig. 10, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable medium 700 has stored therein program code which may be invoked by a processor to perform the methods described in the method embodiments above.
The computer readable storage medium 700 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 700 comprises a non-transitory computer readable medium (non-transitory computer-readable storage medium). The computer readable storage medium 700 has memory space for program code 710 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. Program code 710 may be compressed, for example, in a suitable form.
In some embodiments, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-described method embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, one of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. An image processing method, applied to an electronic device, comprising:
acquiring an image to be displayed at the current moment, depth information of the image to be displayed and motion information of the electronic equipment at the current moment;
Determining current position information of each pixel in the image to be displayed based on the depth information;
predicting the position change information of each pixel in a preset delay time according to the motion information;
and correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image, wherein the target display image is used for being displayed in a superimposed manner in a real scene.
2. The method of claim 1, wherein after obtaining motion information for a current time of the electronic device, the method further comprises:
judging whether the motion information meets a preset motion compensation condition or not;
if yes, executing the step of determining the current position information of each pixel in the image to be displayed based on the depth information;
if not, displaying the image to be displayed in a superimposed manner in the real scene.
3. The method according to claim 2, wherein the motion information includes a moving speed and an angular speed, and the determining whether the motion information satisfies a preset motion compensation condition includes:
judging whether the moving speed reaches a preset moving speed threshold value or not and whether the angular speed reaches a preset angular speed threshold value or not;
If the moving speed reaches the preset moving speed threshold value and the angular speed reaches the preset angular speed threshold value, judging that the motion information meets the preset motion compensation condition;
and if the moving speed does not reach the preset moving speed threshold value and/or the angular speed does not reach the preset angular speed threshold value, judging that the motion information does not meet the preset motion compensation condition.
4. The method according to claim 1, wherein the motion information includes a moving speed and an angular speed, the position change information includes angle change information and distance change information, and the predicting the position change information of each pixel within the preset delay period according to the motion information comprises:
determining the distance change information according to the moving speed and the preset delay period;
and determining the angle change information according to the angular speed and the preset delay period.
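Read literally, claim 4 is a constant-velocity prediction over the preset delay period. A minimal sketch, with scalar speeds and an illustrative function name (the patent does not fix the exact formula):

```python
def predict_position_change(moving_speed, angular_speed, delay_s):
    # Assuming constant motion over the preset delay period:
    # distance change = moving speed x delay, angle change = angular speed x delay.
    distance_change = moving_speed * delay_s
    angle_change = angular_speed * delay_s
    return distance_change, angle_change
```

For example, at 2.0 m/s and 0.5 rad/s over a 100 ms delay, the predicted changes are 0.2 m and 0.05 rad.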
5. The method according to claim 4, wherein correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain the target display image includes:
determining actual position information of each pixel according to the current position information, the angle change information and the distance change information of each pixel;
and moving each pixel in the image to be displayed from the pixel position corresponding to the current position information to the pixel position corresponding to the actual position information to obtain the target display image.
6. The method according to claim 5, wherein the actual position information is first coordinate information in a display coordinate system, the display coordinate system is established based on a display device in the electronic device, and the moving each pixel in the image to be displayed from the pixel position corresponding to the current position information to the pixel position corresponding to the actual position information to obtain the target display image comprises:
acquiring pixel coordinate information of each pixel in the image to be displayed in a pixel coordinate system, and taking the pixel coordinate information as second pixel coordinate information of each pixel;
converting the first coordinate information of each pixel in the display coordinate system into pixel coordinate information in the pixel coordinate system according to preset conversion parameters between the display coordinate system and the pixel coordinate system, and using the pixel coordinate information as the first pixel coordinate information of each pixel;
and adjusting the pixel value at the pixel position corresponding to the first pixel coordinate information of each pixel according to the pixel value of each pixel at the pixel position corresponding to the second pixel coordinate information and a preset interpolation method, so as to obtain the target display image.
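A minimal sketch of the resampling step in claim 6. It assumes the preset conversion parameters are a pinhole-style intrinsic matrix K, and substitutes nearest-neighbour rounding for the unspecified "preset interpolation method"; all names are illustrative:

```python
import numpy as np

def warp_to_actual_positions(image, actual_xyz, K):
    """Project each pixel's actual 3D position (the first coordinate
    information, in the display coordinate system) back into the pixel
    coordinate system via K, then write the pixel's value at the resulting
    location. Nearest-neighbour rounding stands in for the patent's
    'preset interpolation method'."""
    h, w = image.shape[:2]
    target = np.zeros_like(image)
    for v in range(h):          # (u, v): the second pixel coordinate information
        for u in range(w):
            X, Y, Z = actual_xyz[v, u]
            if Z <= 0:
                continue        # point behind the display plane: skip it
            u_new = int(round(K[0, 0] * X / Z + K[0, 2]))
            v_new = int(round(K[1, 1] * Y / Z + K[1, 2]))
            if 0 <= u_new < w and 0 <= v_new < h:
                target[v_new, u_new] = image[v, u]
    return target
```

A per-pixel Python loop is used here only for readability; a real implementation would vectorize the projection and handle occlusions and holes.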
7. The method according to any one of claims 1-6, wherein the depth information includes a depth value corresponding to each pixel, the current position information is second coordinate information in a display coordinate system, the display coordinate system is established based on a display device in the electronic device, and determining, based on the depth information, current position information of each pixel in the image to be displayed includes:
acquiring pixel coordinate information of each pixel in a pixel coordinate system as second pixel coordinate information of each pixel;
converting the second pixel coordinate information of each pixel into vector information in the display coordinate system according to preset conversion parameters between the display coordinate system and the pixel coordinate system, so as to obtain a three-dimensional direction vector of each pixel;
and taking the product of the three-dimensional direction vector of each pixel and the depth value corresponding to each pixel as the second coordinate information of each pixel in the display coordinate system.
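The back-projection in claim 7 can be sketched as follows, again assuming the preset conversion parameters form an invertible pinhole-style matrix K (an assumption; the patent only calls them "preset conversion parameters"):

```python
import numpy as np

def pixel_to_display_coordinates(u, v, depth, K):
    # The inverse of K maps homogeneous pixel coordinates (u, v, 1) to a
    # three-dimensional direction vector in the display coordinate system.
    direction = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # The second coordinate information is the product of that direction
    # vector and the pixel's depth value.
    return depth * direction
```

With K as the identity, pixel (2, 3) at depth 2.0 maps to the display-space point (4, 6, 2).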
8. An image processing apparatus, characterized by being applied to an electronic device, comprising:
the information acquisition unit is used for acquiring an image to be displayed at the current moment, depth information of the image to be displayed and motion information of the electronic equipment at the current moment;
a pixel position determining unit, configured to determine current position information of each pixel in the image to be displayed based on the depth information;
the prediction unit is used for predicting the position change information of each pixel in the preset delay time according to the motion information;
and the image correction unit is used for correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image, wherein the target display image is used for being displayed superimposed on a real scene.
9. An image processing system is characterized by being applied to electronic equipment, and comprises a motion information acquisition module, an image source module, an image processing module and a display module;
the motion information acquisition module is used for acquiring motion information of the electronic equipment at the current moment and transmitting the motion information to the image processing module;
the image source module is used for transmitting an image to be displayed at the current moment and depth information of the image to be displayed to the image processing module;
the image processing module is used for receiving the motion information of the electronic equipment at the current moment, which is transmitted by the motion information acquisition module, and the image to be displayed and the depth information of the image to be displayed, which are transmitted by the image source module; determining current position information of each pixel in the image to be displayed based on the depth information; predicting the position change information of each pixel in a preset delay time according to the motion information; correcting the current position information of each pixel in the image to be displayed based on the position change information to obtain a target display image; transmitting the target display image to the display module;
and the display module is used for receiving the target display image and displaying the target display image superimposed on a real scene.
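The data flow among the four modules of claim 9 can be sketched with each hardware module replaced by a plain callable; every name here is an illustrative stand-in, not from the patent:

```python
def run_pipeline(acquire_motion, acquire_image_and_depth, process, display):
    """Claim 9 data flow: the motion information acquisition module and the
    image source module both feed the image processing module, whose target
    display image is handed to the display module."""
    motion = acquire_motion()                  # motion information acquisition module
    image, depth = acquire_image_and_depth()   # image source module
    target = process(image, depth, motion)     # determine / predict / correct steps
    display(target)                            # display module superimposes on the scene
    return target
```

This makes explicit that the image processing module is the only consumer of both input streams and the only producer for the display.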
10. An electronic device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-7.
11. A computer readable storage medium having stored therein program code which is callable by a processor to perform the method according to any one of claims 1 to 7.
CN202210899214.1A 2022-07-28 2022-07-28 Image processing method, device, system, electronic equipment and storage medium Pending CN117523139A (en)

Publications (1)

Publication Number Publication Date
CN117523139A 2024-02-06

Family ID: 89751796



Legal Events

Date Code Title Description
PB01 Publication