WO2022252924A1 - Image transmission and display method, and related device and system

Image transmission and display method, and related device and system

Info

Publication number
WO2022252924A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
display
signal
display device
transmits
Prior art date
Application number
PCT/CN2022/091722
Other languages
English (en)
Chinese (zh)
Inventor
单双
沈钢
毛春静
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2022252924A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation

Definitions

  • the present application relates to the field of terminal technology, and in particular to an image transmission and display method, related equipment and system.
  • AR devices can superimpose and display virtual images for users while viewing real-world scenes, and users can also interact with virtual images to achieve the effect of augmented reality.
  • a VR device can simulate a three-dimensional (3D) virtual world scene, and can also provide a visual, auditory, tactile or other sensory simulation experience to make users feel as if they are in the scene.
  • the user can also interact with the simulated virtual world scene.
  • MR combines AR and VR, which can provide users with a vision after merging the real and virtual worlds.
  • a head-mounted display device, also called a head-mounted display (HMD), is a display device worn on the user's head, which can provide a new visual environment for the user.
  • Head-mounted display devices can present AR, VR, MR and other display effects to users by emitting optical signals.
  • An important factor affecting the user experience of AR/VR or MR head-mounted display devices is the motion-to-photon (MTP) latency.
  • MTP latency refers to the sum of the delay time generated in a cycle from the user's head movement to the display of the corresponding new image on the device screen.
  • Motion sickness is a physiological reaction of the human body: when the user experiences AR/VR or MR, the displayed picture changes dynamically, so the brain interprets the change as some kind of body movement, but the vestibular organs of the ear, which control body balance and coordination, do not feed back any movement signal to the brain. This mismatch between the visual information and the motion perceived by the brain causes dizziness and vomiting in the user, which is called motion sickness.
  • It is currently recognized in the industry that an MTP latency of less than 20 milliseconds (ms) can greatly reduce the occurrence of motion sickness.
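  • As a rough illustration only (the stage names and millisecond values below are assumptions of this sketch, not figures from the present application), MTP latency can be treated as the sum of the delays of the stages on the motion-to-display path and compared against the 20 ms target:

```python
# Illustrative MTP (motion-to-photon) latency budget.
# Stage names and millisecond values are assumptions, not measured figures.
stages_ms = {
    "sensor sampling (IMU)": 2.0,
    "pose computation": 1.0,
    "rendering": 8.0,
    "transmission to display": 5.0,
    "display refresh / scan-out": 6.0,
}

mtp_ms = sum(stages_ms.values())
print(f"Estimated MTP latency: {mtp_ms:.1f} ms")
print("within the 20 ms target" if mtp_ms < 20 else "exceeds the 20 ms target")
```

  • In this toy budget the serial rendering, transmission, and display stages dominate, which is exactly the part of the path that the sliced, parallel processing described below is intended to shorten.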
  • This application provides an image transmission and display method and related equipment, which can optimize the display path by dividing a whole image into multiple parts whose transmission, rendering, and display are processed in parallel, thereby reducing the waiting time during image transmission and display and further reducing the overall transmission and display latency.
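  • For intuition only, one simple way to divide a whole frame into parts is to cut it into horizontal bands; the two-slice split and the use of NumPy below are assumptions of this sketch, not details specified by the application:

```python
import numpy as np

def split_into_slices(frame: np.ndarray, num_slices: int) -> list:
    """Split a (H, W, C) frame into horizontal bands of roughly equal height."""
    return np.array_split(frame, num_slices, axis=0)

# Example: a 1080x1920 RGB frame divided into a "first image" and a
# "second image" that together form the complete "third image".
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
first_image, second_image = split_into_slices(frame, 2)
print(first_image.shape, second_image.shape)   # (540, 1920, 3) (540, 1920, 3)
assert np.vstack([first_image, second_image]).shape == frame.shape
```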
  • the present application provides an image transmission and display method, the method is used for displaying by a first device, and the first device includes a display device.
  • the method may include: between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits the first image signal to the display device. Between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits a second image signal to the display device, and the first image signal is not synchronized with the second image signal.
  • In addition, the time at which the first device transmits the first image signal to the display device also falls between the first display signal and the second display signal.
  • Likewise, the time at which the first device transmits the second image signal to the display device also falls between the first display signal and the second display signal.
  • the time interval between the first vertical synchronization signal and the second vertical synchronization signal is T1, the time interval between the first display signal and the second display signal is T2, and T1 is equal to T2.
  • the display device displays a black-inserted image frame, the period of the black-inserted image frame is T3, and T3 is shorter than T1.
  • In response to the first vertical synchronization signal or the second vertical synchronization signal, the display device starts to display the black-inserted image frame. In response to the first display signal or the second display signal, the display device finishes displaying the black-inserted image frame and displays a third image, where the third image includes the first image and the second image.
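  • The signal relationships above can be checked with a small consistency sketch. All times are invented, and the assumption that the black-inserted frame runs from a vertical synchronization signal to the following display signal is one reading of the description; only the asserted constraints come from it:

```python
# Consistency check of the described signal timing. Times are in ms and are
# illustrative assumptions; only the asserted constraints reflect the text.
T1 = 11.1                        # interval between vertical synchronization signals
vsync1, vsync2 = 0.0, T1
display1, display2 = -2.0, 9.1   # display signals, interval T2
T2 = display2 - display1
T3 = display2 - vsync1           # assumed black-insertion span: vsync1 -> next display signal

# Transmission windows (start, end) of the first and second image signals.
tx_first_image = (1.0, 4.0)
tx_second_image = (4.0, 7.0)

def within(interval, lo, hi):
    return lo < interval[0] and interval[1] < hi

assert abs(T1 - T2) < 1e-9       # T1 equals T2
assert T3 < T1                   # black-inserted frame period shorter than T1
for tx in (tx_first_image, tx_second_image):
    assert within(tx, vsync1, vsync2)       # between the two vertical sync signals
    assert within(tx, display1, display2)   # and between the two display signals
print(f"T1={T1} ms, T2={T2} ms, T3={T3:.1f} ms: all stated constraints hold")
```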
  • the first image signal is an image signal of a rendered first image
  • the second image signal is an image signal of a rendered second image. That is, before sending and displaying the first image and the second image in slices, the first image and the second image will be respectively rendered in slices.
  • when the first device transmits the first image signal to the display device, the first device renders the second image. That is, the step of transmitting and displaying the first image is performed in parallel with the step of rendering the second image.
  • the first time is the time taken by the first device from starting to render the first image to finishing transmitting the second image to the display device.
  • the second time is the time taken by the first device from starting to render the third image to finishing transmitting the third image to the display device, where the third image includes the first image and the second image.
  • the first time is shorter than the second time.
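  • The relationship "first time < second time" can be seen with a toy model in which each half-slice costs half of the whole-frame rendering and transmission time; the millisecond values are illustrative assumptions:

```python
# Toy model of the "first time" (sliced, pipelined) versus the "second time"
# (whole frame, serial). The whole-frame costs below are illustrative assumptions.
render_whole_ms = 8.0      # render the whole third image
transmit_whole_ms = 6.0    # transmit the whole third image to the display device

# Second time: render the whole third image, then transmit it.
second_time = render_whole_ms + transmit_whole_ms

# First time: two slices; while the first image is being transmitted to the
# display device, the second image is rendered in parallel (each slice is
# assumed to cost half of the whole-frame time).
render_slice = render_whole_ms / 2
transmit_slice = transmit_whole_ms / 2

first_image_rendered = render_slice
first_image_transmitted = first_image_rendered + transmit_slice
second_image_rendered = first_image_rendered + render_slice      # overlaps transmission
first_time = max(first_image_transmitted, second_image_rendered) + transmit_slice

print(f"first time  (sliced, pipelined):   {first_time:.1f} ms")
print(f"second time (whole frame, serial): {second_time:.1f} ms")
assert first_time < second_time
```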
  • before the first device transmits the first image signal to the display device, the first device renders the first image.
  • the first device further includes an image rendering module, the image rendering module is configured to render the first image and the second image, and the display device sends a feedback signal to the image rendering module, where the feedback signal is used to indicate the vertical synchronization information of the display device.
  • before the first device renders the first image, the first device receives the first image transmitted by the second device. Before the first device renders the second image, the first device receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image in slices, it receives the first image and the second image transmitted by the second device to the first device in slices.
  • when the first device renders the first image, the first device receives the second image transmitted by the second device. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
  • the third time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device.
  • the fourth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device. The third time is shorter than the fourth time.
  • before the first device transmits the first image signal to the display device, the first device receives the first image transmitted by the second device. Before the first device transmits the second image signal to the display device, the first device receives the second image transmitted by the second device. That is, before the first device transmits and displays the first image and the second image in slices, it receives the first image and the second image transmitted by the second device to the first device in slices.
  • when the first device transmits the first image signal to the display device, the first device receives the second image transmitted by the second device. That is, the step of transmitting and displaying the first image is performed in parallel with the step of transmitting the second image.
  • the fifth time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device.
  • the sixth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device. The fifth time is shorter than the sixth time.
  • both the first image and the second image are rendered by the second device. That is, before the first device receives and transmits the first image and the second image in slices from the second device, the second device performs slice rendering on the first image and the second image respectively.
  • when the second device renders the second image, the first device receives the first image transmitted by the second device. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
  • the seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device.
  • the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device. The seventh time is shorter than the eighth time.
  • in response to the second display signal, the first device displays a third image, where the third image includes the first image and the second image.
  • in response to the first image signal, the first device displays the first image. In response to the second image signal, the first device displays the second image.
  • the display device further includes a frame buffer, where the frame buffer is used to store pixel data of the first image and the second image.
  • the first device reads pixel data of the first image and the second image in the frame buffer, and the first device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
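  • The two frame-buffer strategies described above (refresh the whole third image once both slices' pixel data are present, or display each slice as soon as its pixel data are read) can be sketched as follows; the class, resolution, and two-slice split are assumptions of this sketch:

```python
import numpy as np

class DisplayDevice:
    """Toy display whose frame buffer receives the frame as two slices."""

    def __init__(self, height: int, width: int):
        self.frame_buffer = np.zeros((height, width, 3), dtype=np.uint8)
        self.slices_received = 0

    def write_slice(self, slice_pixels: np.ndarray, row_offset: int):
        """Store one slice's pixel data into the frame buffer."""
        h = slice_pixels.shape[0]
        self.frame_buffer[row_offset:row_offset + h] = slice_pixels
        self.slices_received += 1

    def refresh_whole_frame(self):
        """Strategy A: refresh the complete third image once both slices are present."""
        if self.slices_received == 2:
            print("displaying third image (first image + second image)")

    def refresh_slice(self, row_offset: int, height: int):
        """Strategy B: display a slice as soon as its pixel data are available."""
        print(f"displaying rows {row_offset}..{row_offset + height - 1}")

# Example: a 1080x1920 frame sent as two 540-row slices.
display = DisplayDevice(1080, 1920)
first_image = np.full((540, 1920, 3), 10, dtype=np.uint8)
second_image = np.full((540, 1920, 3), 20, dtype=np.uint8)

display.write_slice(first_image, row_offset=0)
display.refresh_slice(0, 540)          # Strategy B: show the first image immediately
display.write_slice(second_image, row_offset=540)
display.refresh_slice(540, 540)        # Strategy B: show the second image
display.refresh_whole_frame()          # Strategy A: the whole third image is now available
```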
  • the present application provides an image transmission and display method, the method is used for displaying by a first device, and the first device includes a display device.
  • the method may include: the first device transmits the first image to the display device, the first device transmits the second image to the display device, and the display device displays a third image, wherein the third image includes the first image and the second image.
  • before the first device transmits the first image to the display device, the first device renders the first image. Before the first device transmits the second image to the display device, the first device renders the second image. That is, before the first image and the second image are transmitted and displayed in slices, they are respectively rendered in slices.
  • when the first device transmits the first image to the display device, the first device renders the second image. That is, the step of transmitting and displaying the first image is performed in parallel with the step of rendering the second image.
  • the first time is the time taken by the first device from starting to render the first image to finishing transmitting the second image to the display device.
  • the second time is the time taken by the first device from starting to render the third image to finishing transmitting the third image to the display device, where the third image includes the first image and the second image.
  • the first time is shorter than the second time.
  • before the first device renders the first image, the first device receives the first image transmitted by the second device. Before the first device renders the second image, the first device receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image in slices, it receives the first image and the second image transmitted by the second device to the first device in slices.
  • when the first device renders the first image, the first device receives the second image transmitted by the second device. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
  • the third time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device.
  • the fourth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device. The third time is shorter than the fourth time.
  • before the first device transmits the first image to the display device, the first device receives the first image transmitted by the second device. Before the first device transmits the second image to the display device, the first device receives the second image transmitted by the second device. That is, before the first device transmits and displays the first image and the second image in slices, it receives the first image and the second image transmitted by the second device to the first device in slices.
  • when the first device transmits the first image to the display device, the first device receives the second image transmitted by the second device. That is, the step of transmitting and displaying the first image is performed in parallel with the step of transmitting the second image.
  • the fifth time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device.
  • the sixth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device. The fifth time is shorter than the sixth time.
  • the first image and the second image are rendered by the second device. That is, before the first device receives and transmits the first image and the second image in slices from the second device, the second device performs slice rendering on the first image and the second image respectively.
  • when the second device renders the second image, the first device receives the first image transmitted by the second device. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
  • the seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device.
  • the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device. The seventh time is shorter than the eighth time.
  • the display device further includes a frame buffer, where pixel data of the first image and the second image are stored in the frame buffer.
  • the first device reads pixel data of the first image and the second image in the frame buffer, and then the display device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the method further includes: in response to the first image signal, the first device displays the first image. In response to the second image signal, the first device displays the second image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
  • the present application provides an image transmission and display method, and the method may include: establishing a connection between a first device and a second device.
  • the second device transmits the first image to the first device via the connection, and the second device transmits the second image to the first device via the connection.
  • the first device includes a display device, the first device transmits the first image to the display device, and the first device transmits the second image to the display device.
  • the display device displays the third image, wherein the third image includes the first image and the second image.
  • the second device divides a whole frame of image into multiple parts and transmits them to the first device in parallel, and the first device then processes the multiple parts for display in parallel, so that the display path is optimized, the waiting time during image transmission and display is reduced, the transmission and display latency is further reduced, and the image can be displayed faster.
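  • The end-to-end gain of this two-device slice pipeline can be illustrated with a small model in which transfer over the connection and transfer to the display device overlap; the per-stage times and two-slice split are assumptions, not measured values:

```python
# Toy pipeline: the second device transmits slices over the connection, and the
# first device forwards each slice to its display device as soon as it arrives,
# so link transfer and display transfer overlap.
link_whole_ms = 8.0      # second device -> first device, whole frame (assumed)
panel_whole_ms = 6.0     # first device -> display device, whole frame (assumed)
num_slices = 2

link_slice = link_whole_ms / num_slices
panel_slice = panel_whole_ms / num_slices

# Serial baseline: transfer the whole third image, then send it to the display.
serial_total = link_whole_ms + panel_whole_ms

# Sliced pipeline: while slice i is being pushed to the display device,
# slice i+1 is already being transmitted over the connection.
link_done = 0.0
panel_done = 0.0
for _ in range(num_slices):
    link_done += link_slice                                 # slice arrives at the first device
    panel_done = max(panel_done, link_done) + panel_slice   # then goes to the display device
pipelined_total = panel_done

print(f"whole-frame serial path : {serial_total:.1f} ms")
print(f"two-slice pipelined path: {pipelined_total:.1f} ms")
assert pipelined_total < serial_total
```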
  • when the first device transmits the first image to the display device, the second device transmits the second image to the first device through the connection. That is, the step of transmitting and displaying the first image is performed in parallel with the step of transmitting the second image.
  • the fifth time is the time taken from when the second device starts transmitting the first image to the first device to when the first device finishes transmitting the second image to the display device.
  • the sixth time is the time taken from when the second device starts transmitting the third image to the first device to when the first device finishes transmitting the third image to the display device. The fifth time is shorter than the sixth time.
  • before the second device transmits the first image to the first device through the connection, the second device renders the first image. Before the second device transmits the second image to the first device through the connection, the second device renders the second image. That is, before the first device receives the first image and the second image transmitted in slices by the second device, the second device performs slice rendering on the first image and the second image respectively.
  • when the first device receives the first image transmitted by the second device, the second device renders the second image. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
  • the seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device.
  • the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device. The seventh time is shorter than the eighth time.
  • the method further includes: after the second device transmits the first image to the first device through the connection and before the first device transmits the first image to the display device, the first device renders the first image. After the second device transmits the second image to the first device through the connection and before the first device transmits the second image to the display device, the first device renders the second image.
  • when the second device transmits the second image to the first device through the connection, the first device renders the first image. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
  • the third time is the time taken from when the second device starts transmitting the first image to the first device to when the first device finishes transmitting the second image to the display device.
  • the fourth time is the time taken from when the second device starts transmitting the third image to the first device to when the first device finishes transmitting the third image to the display device. The third time is shorter than the fourth time.
  • the display device further includes a frame buffer, where pixel data of the first image and the second image are stored in the frame buffer.
  • the first device reads pixel data of the first image and the second image in the frame buffer, and the first device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the method further includes: in response to the first image signal, the first device displays the first image. In response to the second image signal, the first device displays the second image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
  • the embodiment of the present application provides an electronic device, and the electronic device may include: a communication device, a display device, a memory, a processor coupled to the memory, multiple application programs, and one or more programs.
  • Computer-executable instructions are stored in the memory, and the display device is used to display images.
  • when the processor executes the instructions, the electronic device can implement any function of the first device in the first aspect or the second aspect.
  • an embodiment of the present application provides a computer storage medium in which a computer program is stored. The computer program includes executable instructions which, when executed by a processor, cause the processor to perform the operations corresponding to the method provided in the first aspect or the second aspect.
  • an embodiment of the present application provides a computer program product, which, when the computer program product runs on an electronic device, causes the electronic device to execute any possible implementation manner in the first aspect or the second aspect.
  • the embodiment of the present application provides a chip system, which can be applied to an electronic device. The chip system includes one or more processors, and the processors are used to invoke computer instructions so that the electronic device implements any possible implementation of the first aspect or the second aspect.
  • the embodiment of the present application provides a communication system. The communication system includes a first device and a second device, and the first device can implement some of the functions of the first device in the first aspect or the second aspect.
  • Implementing the above aspects provided by the embodiments of the present application can reduce the delay in the image transmission and display process, so that images can be displayed faster. Furthermore, in AR/VR or MR scenarios, this can reduce the MTP latency, effectively alleviate adverse reactions of motion sickness such as dizziness and vomiting, and improve the user experience. It can be understood that the method provided by this application can also be applied to more scenarios, such as an electronic device projecting images to another display or a vehicle-mounted device displaying images; the parallel processing of sliced transmission, rendering, and display speeds up the end-to-end display process, reduces delay, and improves the user's viewing experience.
  • FIG. 1 is a schematic structural diagram of a head-mounted display device
  • FIG. 2 is a schematic diagram of a usage scenario of a head-mounted display device
  • FIG. 3 is a schematic diagram of a hardware structure of an electronic device provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the software structure of the electronic device provided by the embodiment of the present application.
  • FIG. 5 is a schematic diagram of a communication system provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a process module provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an image display process
  • FIG. 8 is a schematic diagram of a module time-consuming analysis provided by the embodiment of the present application.
  • FIG. 9 is a schematic diagram of a module time-consuming analysis provided by the embodiment of the present application.
  • FIG. 10 is a schematic diagram of an image slice transmission process provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of image division provided by the embodiment of the present application.
  • FIG. 12 is an image slice sending display diagram provided by the embodiment of the present application.
  • FIG. 13 is a schematic diagram of an image slice transmission process provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a rendering process provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of an image slice transmission process provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a communication system provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of an image slice transmission process provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of an image slice transmission process provided by an embodiment of the present application.
  • FIG. 19 is a flow chart of an image transmission and display method provided by an embodiment of the present application.
  • FIG. 20 is a flow chart of an image transmission and display method provided by an embodiment of the present application.
  • FIG. 21 is a flow chart of an image transmission and display method provided by an embodiment of the present application.
  • FIG. 22 is a schematic diagram of the process of an image transmission and display method provided by the embodiment of the present application.
  • FIG. 23 is a schematic diagram of the process of an image transmission and display method provided by the embodiment of the present application.
  • FIG. 24 is a schematic diagram of the process of an image transmission and display method provided by the embodiment of the present application.
  • FIG. 25 is a schematic diagram of the process of an image transmission and display method provided by the embodiment of the present application.
  • FIG. 26 is a schematic diagram of a process of an image transmission and display method provided by an embodiment of the present application.
  • the terms “first” and “second” are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Therefore, features defined as “first” or “second” may explicitly or implicitly include one or more of these features.
  • plurality refers to two or more than two.
  • the electronic device involved in the embodiment of the present application may be a head-mounted display device, worn on the user's head, and the head-mounted display device may realize display effects such as AR, VR, and MR.
  • the appearance form of the head-mounted display device may be glasses, a helmet, an eye mask, etc., which is not limited in this embodiment of the present application.
  • the electronic device involved in the embodiment of the present application may also be other devices including a display screen, such as a mobile phone, a personal computer (personal computer, PC), a tablet computer (portable android device, PAD) , a smart TV, or a vehicle-mounted device, etc., the embodiment of the present application does not limit the type of the electronic device.
  • the virtual object displayed by the electronic device can interact with the user.
  • the user can directly interact with the virtual object displayed on the electronic device through somatosensory interaction methods such as hand/arm movement, head movement, and eyeball rotation.
  • the electronic device can be used together with the handheld device, and the user can interact with the virtual object displayed on the electronic device by manipulating the handheld device.
  • the handheld device may be, for example, a handle, controller, gyro mouse, stylus, or other handheld computing device.
  • Handheld devices can be configured with various sensors such as acceleration sensors, gyroscope sensors, magnetic sensors, etc., which can be used to detect and track their own motion.
  • Handheld devices can communicate with electronic devices through short-distance wireless transmission technologies such as wireless fidelity (Wi-Fi), Bluetooth, near field communication (NFC), and ZigBee, or through wired connections such as a universal serial bus (USB) interface or a custom interface.
  • FIG. 1 shows a schematic structural diagram of a head-mounted display device 200 .
  • the head-mounted display device 200 may include some or all of the following: a left lens 201, a right lens 202, a left display screen 203, a right display screen 204, a left camera 205, a right camera 206, an inertial measurement unit (IMU) 207, and the like.
  • both the left lens 201 and the right lens 202 may be convex lenses, Fresnel lenses or one or more transparent lenses of other types.
  • through the left lens 201 and the right lens 202, the image on the display screen can be imaged clearly onto the user's retina, so that the user's eyes can clearly see the image on a display screen that is almost attached to the front of the user's eyes.
  • the left display screen 203 and the right display screen 204 are used to display images of the real world or virtual world or a combination of virtual and real worlds.
  • the left camera 205 and the right camera 206 are used to collect real-world images.
  • IMU207 is a sensor used to detect and measure acceleration and rotational motion, which can include accelerometers, angular velocity meters (or gyroscopes), etc.
  • the accelerometer is a sensor that is sensitive to axial acceleration and converts it into a usable output signal.
  • the gyroscope is a sensor that is sensitive to the angular velocity of a moving body relative to inertial space.
  • FIG. 2 is a schematic diagram of a usage scenario of a head-mounted display device 200 according to an embodiment of the present application.
  • the user's eyes can see the image 210 presented on the display screen of the head-mounted display device 200 .
  • the image 210 presented on the display screen of the head-mounted display device 200 is a visualized virtual environment, which may be a virtual world image, a real world image, or a combination of virtual and real world images.
  • the two images have positional deviations, so the image information obtained by the user's left eye and right eye differs; this difference is the parallax.
  • the user's brain subconsciously calculates the distance between the object and the body according to the parallax, so that the image viewed by the user has a sense of three-dimensionality and depth of field, giving the user the feeling of being in the virtual environment and forming a hyper-realistic, immersive experience of being there.
  • the provision of a visual virtual environment by an electronic device means that the electronic device renders and displays a virtual image composed of one or more virtual objects, or an image combining real objects and virtual objects by using display technologies such as AR/VR or MR.
  • the virtual object may be generated by the electronic device itself using computer graphics technology, computer simulation technology, etc., or may be generated by other electronic devices using computer graphics technology, computer simulation technology, etc. and sent to the electronic device.
  • Other electronic devices may be servers, and may also be mobile phones, computers, etc. connected or paired with the electronic devices.
  • Virtual objects may also be referred to as virtual images or virtual elements. Virtual objects can be two-dimensional or three-dimensional. Virtual objects are fake objects that do not exist in the physical world.
  • the virtual object may be a virtual object imitating an object existing in a real physical world, thereby bringing an immersive experience to the user.
  • Virtual objects may include virtual animals, virtual characters, virtual trees, virtual buildings, virtual labels, icons, pictures or videos and so on.
  • correspondingly, a real object refers to an object existing in the real physical environment or physical space where the user and the electronic device are currently located.
  • Real objects may include animals, people, trees, buildings, and the like.
  • the product form of the head-mounted display device can be an all-in-one machine or a split machine.
  • An all-in-one machine refers to a device with an independent processor, which has independent computing, input and output functions, without the constraints of connections, and has a higher degree of freedom.
  • the split machine means that the display device is separated from the host computer, the display device is mainly used for displaying images, and the host computer is mainly used for calculation processing. Since the host processing system of the split machine is separated from the head-mounted display device, the host can adopt a higher-performance processor and heat dissipation system. Therefore, the advantage of the split machine is that the function allocation between the display device and the host is more reasonable, the processing performance is stronger, and the resources are more abundant. However, the split machine needs to consider cross-device and cross-platform compatibility, such as hardware platform, software system, operating system, application software, etc.
  • the hardware structure of the electronic device 100 provided in the embodiment of the present application is described as an example below.
  • the electronic device 100 provided in the embodiment of the present application may be the aforementioned head-mounted display device 200 , or may be the host 310 or other electronic devices described later.
  • the electronic device 100 may include, but is not limited to, a mobile phone, a desktop computer, a notebook computer, a tablet computer, a smart screen (smart TV), a laptop computer, a handheld computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a game console, a smart home device, an Internet of Things or Internet of Vehicles device, etc.
  • FIG. 3 is a schematic diagram of a hardware structure of an electronic device 100 provided in an embodiment of the present application.
  • the electronic device 100 is described by taking the head-mounted display device 200 as an example.
  • when the electronic device 100 is another device such as a mobile phone, part of the hardware structure may be added or removed.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display device 194, an eye tracking module 195, and the like.
  • the sensor module 180 can be used to acquire the user's posture, and can include a pressure sensor 180A, a gyroscope sensor 180B, an acceleration sensor 180C, a distance sensor 180D, a touch sensor 180E, and the like.
  • the structure shown in this embodiment does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 is generally used to control the overall operation of the electronic device 100, and may include one or more processing units, for example: the processor 110 may include a central processing unit (central processing unit, CPU), an application processor (application processor, AP) , modem processor, graphics processing unit (graphics processing unit, GPU), image signal processor (image signal processor, ISP), video processing unit (video processing unit, VPU), controller, memory, video codec , digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, a serial peripheral interface (serial peripheral interface, SPI) interface, etc.
  • the processor 110 may render different objects based on different frame rates, for example, use a high frame rate for rendering for nearby objects, and use a low frame rate for rendering for distant objects.
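  • A minimal sketch of such a distance-dependent rendering policy follows; the distance thresholds and frame rates are invented for illustration and are not specified by the application:

```python
# Illustrative policy: render nearby objects at a high frame rate and distant
# objects at a lower one. Thresholds and rates are assumptions of this sketch.
def choose_render_rate_hz(distance_m: float) -> int:
    if distance_m < 2.0:      # near objects: full rate
        return 90
    if distance_m < 10.0:     # mid-range objects
        return 45
    return 30                 # far objects: a lower rate is usually acceptable

for d in (0.5, 5.0, 50.0):
    print(f"object at {d:>5.1f} m -> render at {choose_render_rate_hz(d)} Hz")
```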
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be respectively coupled to the touch sensor 180E, the charger, the flashlight, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180E through the I2C interface, so that the processor 110 and the touch sensor 180E communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of listening to audio through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of listening to audio through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing audio through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display device 194 and the camera 193 .
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
  • the processor 110 communicates with the display device 194 through the DSI interface to realize the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193 , the display device 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as mobile phones, computers, VR/AR or MR devices, etc.
  • the USB interface may be USB3.0, which is compatible with high-speed display port (DP) signal transmission, and can transmit video and audio high-speed data.
  • the interface connection relationship between the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also provide power for electronic devices through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display device 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be disposed in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the electronic device 100 may include a wireless communication function.
  • the head-mounted display device 200 may receive rendered images from other electronic devices (such as a VR host or a VR server) for display, or receive unrendered images, which the processor 110 then renders and displays.
  • the wireless communication function can be realized by an antenna (not shown), a mobile communication module 150, a wireless communication module 160, a modem processor (not shown), and a baseband processor (not shown).
  • Antennas are used to transmit and receive electromagnetic wave signals. Multiple antennas may be included in the electronic device 100, and each antenna may be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antennas can be multiplexed as diversity antennas for wireless LANs. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions applied to the electronic device 100, such as second generation (2G), third generation (3G), fourth generation (4G), and fifth generation (5G) networks.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave and radiate it through the antenna.
  • At least part of the functional modules of the mobile communication module 150 may be set in the processor 110 . In some embodiments, at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display device 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, such as wireless local area networks (WLAN) (for example, a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic wave and radiate it through the antenna.
  • the antenna of the electronic device 100 is coupled to the mobile communication module 150 and the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS can include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 realizes the display function through the GPU, the display device 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display device 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display device 194 is used to display images, videos and the like. Wherein, the display device 194 may be used to present one or more virtual objects, so that the electronic device 100 provides a virtual reality scene for the user. In some embodiments, the electronic device 100 may include 1 or N display devices 194 , where N is a positive integer greater than 1.
  • the manner in which the display device 194 presents the virtual object may include one or more of the following:
  • the display device 194 may include a display screen, and the display screen may include a display panel.
  • the display panel can be used to display physical objects and/or virtual objects, so as to present a three-dimensional virtual environment for users. The user can see the virtual object on the display panel and experience the virtual reality scene.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, quantum dot light emitting diodes (QLED), etc.
  • the display device 194 may include optics for projecting an optical signal (eg, a light beam) directly onto the user's retina.
  • the display device 194 can convert the real pixel image display into a near-eye projected virtual image display through one or several optical devices such as reflective mirrors, transmissive mirrors, or optical waveguides, so that the user can directly see virtual objects through the optical signals projected by the optical devices, feel the three-dimensional virtual environment, and realize a virtual interactive experience or an interactive experience combining virtuality and reality.
  • the optical device may be a pico projector or the like.
  • the number of display devices 194 in the electronic device may be two, corresponding to the two eyes of the user respectively.
  • the content displayed on the two display devices can be displayed independently. Images with parallax can be displayed on the two display devices to improve the stereoscopic effect of the images.
  • the number of display devices 194 in the electronic device may also be one, and the user's two eyes watch the same image.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display device 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the camera 193 may include, but not limited to, a traditional color camera (RGB camera), a depth camera (RGB depth camera), a dynamic vision sensor (dynamic vision sensor, DVS) camera, and the like.
  • camera 193 may be a depth camera.
  • the depth camera can collect the spatial information of the real environment.
  • the camera 193 may capture an image including a real object
  • the processor 110 may fuse the image of the real object captured by the camera 193 with the image of the virtual object, and display the fused image through the display device 194 .
  • the camera 193 may collect hand images or body images of the user, and the processor 110 may be configured to analyze the images collected by the camera 193 to recognize hand motions or body motions input by the user.
  • the camera 193 can be used in conjunction with an infrared device (such as an infrared emitter) to detect the user's eye movements, such as eye gaze direction, blinking operation, gaze operation, etc., thereby realizing eye tracking (eye tracking).
  • the electronic device 100 may further include an eye-tracking module 195, which is used to track the movement of the human eye, and then determine the gaze point of the human eye.
  • the position of the pupil can be located by image processing technology, the coordinates of the center of the pupil can be obtained, and then the gaze point of the person can be calculated.
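  • As an illustrative sketch (not the specific algorithm of this application), the pupil center of a grayscale infrared eye image can be approximated by thresholding the dark pupil region and taking its centroid, after which a pre-calibrated mapping converts the pupil center into a gaze point; the threshold value and the affine mapping below are assumptions:

```python
import numpy as np

def pupil_center(eye_image: np.ndarray, dark_threshold: int = 40):
    """Estimate the pupil center of a grayscale IR eye image.

    The pupil is assumed to be the darkest region; we threshold it and
    return the centroid of the selected pixels as (x, y) coordinates.
    """
    mask = eye_image < dark_threshold          # pupil pixels are dark
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                            # pupil not found
    return float(xs.mean()), float(ys.mean())  # pupil center (x, y)

def gaze_point(center_xy, calibration_matrix: np.ndarray):
    """Map the pupil center to a gaze point on the display with a
    pre-calibrated 2x3 affine mapping (illustrative assumption)."""
    x, y = center_xy
    return calibration_matrix @ np.array([x, y, 1.0])
```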
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may be used to store application programs of one or more applications, and the application programs include instructions.
  • the application program When the application program is executed by the processor 110, the electronic device 100 is made to generate content for presentation to the user.
  • the application may include an application for managing the head-mounted display device 200, a game application, a meeting application, a video application, a desktop application or other applications, and the like.
  • the internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (non-volatile memory, NVM).
  • Random access memory has the characteristics of fast read/write speed and volatility. Volatile means that once the power is turned off, the data stored in RAM will disappear. Usually, the static power consumption of the random access memory is extremely low, and the operating power consumption is relatively large.
  • the data in RAM is memory data, which can be read at any time and disappears when the power is turned off.
  • Non-volatile memory has the characteristics of non-volatility and stable data storage.
  • Non-volatile means that the stored data will not disappear after the power is turned off, and the data can be saved for a long time without power.
  • the data in NVM includes application data, which can be stored stably in NVM for a long time.
  • Application data refers to the content written during the running of an application or service process, such as photos or videos obtained by a camera application, text edited by a user in a document application, and so on.
  • Random access memory can include static random-access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM; for example, the fifth generation DDR SDRAM is generally called DDR5 SDRAM), etc.
  • the non-volatile memory may include magnetic disk storage, flash memory, and the like.
  • the disk storage device is a memory with a disk as the storage medium, which has the characteristics of large storage capacity, high data transmission rate, and long-term storage of stored data.
  • flash memory can include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc.
  • According to the level of the storage cells, the flash memory can include single-level cells (SLC), multi-level cells (MLC), triple-level cells (TLC), quad-level cells (QLC), etc.
  • According to storage specifications, the flash memory can include universal flash storage (UFS), embedded multimedia card (embedded multi media Card, eMMC), etc.
  • the random access memory can be directly read and written by the processor 110, and can be used to store executable programs (such as machine instructions) of an operating system or other running programs, and can also be used to store data of users and application programs.
  • the non-volatile memory can also store executable programs and data of users and application programs, etc., and can be loaded into the random access memory in advance for the processor 110 to directly read and write.
  • the external memory interface 120 can be used to connect an external non-volatile memory, so as to expand the storage capacity of the electronic device 100 .
  • the external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and video are stored in an external non-volatile memory.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
  • Receiver 170B, also called the "earpiece", is used to convert audio electrical signals into sound signals.
  • the receiver 170B can be placed close to the human ear to receive the voice.
  • the microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a phone call or sending a voice message, the user can put his mouth close to the microphone 170C to make a sound, and input the sound signal to the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the electronic device 100 may include one or more keys 190 , and these keys 190 may control the electronic device and provide a user with access to functions on the electronic device 100 .
  • the key 190 may be in a mechanical form such as a button, a switch, or a dial, or may be a touch or near-touch sensing device (such as a touch sensor).
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the keys 190 may include a power key, a volume key and the like.
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the electronic device 100 .
  • Touch operations in different application scenarios (for example: time reminder, receiving information, alarm clock, games, etc.) may also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging status, the change of the battery capacity, and can also be used to indicate messages, notifications and the like.
  • the electronic device 100 may also include other input and output interfaces, and other devices may be connected to the electronic device 100 through a suitable input and output interface.
  • Components may include, for example, audio/video jacks, data connectors, and the like.
  • the electronic device 100 is equipped with one or more sensors, including but not limited to a pressure sensor 180A, a gyroscope sensor 180B, an acceleration sensor 180C, a distance sensor 180D, a touch sensor 180E and the like.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
  • In some embodiments, the gyro sensor 180B can determine the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes).
  • the gyro sensor 180B can also be used for navigation, somatosensory game scenes, camera anti-shake and so on.
  • the acceleration sensor 180C can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of electronic devices, and can be used in somatosensory game scenes, horizontal and vertical screen switching, pedometers and other applications.
  • the electronic device 100 may track the movement of the user's head according to an acceleration sensor, a gyroscope sensor, and the like.
  • the distance sensor 180D is used to measure the distance.
  • the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180D for distance measurement to achieve fast focusing.
  • the touch sensor 180E is also called “touch device”.
  • the touch sensor 180E is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation may be provided through the display device 194 .
  • the processor 110 may be configured to determine the content that the user sees through the display device of the electronic device 100 according to the sensor data collected by the electronic device 100 .
  • the GPU is used to perform mathematical and geometric operations according to the data obtained from the processor 110 (for example, data provided by the application program), render images using computer graphics technology, computer simulation technology, etc., and determine the image used for display on the electronic device 100.
  • the GPU may add correction or pre-distortion to the rendering process of the image to compensate or correct the distortion caused by the optical components of the electronic device 100 .
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of this application takes a system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
  • FIG. 4 is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • in the layered architecture, the system is divided into five layers, which from top to bottom are the applications layer (applications), the application framework layer (application framework), the system libraries (native libraries) and Android runtime, the hardware abstraction layer (hardware abstract layer, HAL), and the kernel layer (kernel).
  • the application layer can consist of a series of application packages.
  • Application packages can include applications such as Camera, Gallery, Games, WLAN, Bluetooth, Music, Video, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, an activity manager, a display manager, a resource manager, an input manager, a notification manager, a view system (views), etc.
  • a window manager is used to manage window programs.
  • the window manager can be used to draw the size and location area of a window, control whether a window is displayed or hidden, manage the display order of multiple windows, obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the activity manager is used to manage the life cycle of the application's activities, such as managing activity processes such as creation, background, and destruction.
  • the display manager is used to manage the display life cycle of the application. It can decide how to control its logical display according to the currently connected physical display device, and send notifications to the system and applications when the state changes, and so on.
  • the input manager is used to monitor and manage input events in a unified manner. For example, when it is detected that the user inputs with the handheld controller, or a sensor detects the user's motion data, the input manager can listen for the system call and send the monitored input event for further forwarding or processing.
  • the resource manager provides applications with access to various non-code resources, such as localized strings, icons, images, layout files, video files, and so on.
  • the resource management interface class ResourceImpl can be used as an external resource management interface, through which application resources can be updated.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • the notification display interface may include a view for displaying text and a view for displaying pictures.
  • Runtime includes core library and virtual machine. The runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • System libraries can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem, and provides fusion of two-dimensional (2-Dimensional, 2D) and three-dimensional (3-Dimensional, 3D) layers for multiple applications.
  • the surface manager can obtain the SurfaceFlinger service.
  • the SurfaceFlinger service is the core of the graphical user interface (graphical user interface, GUI), and is responsible for mixing and outputting the graphics data of all applications to the buffer stream in order.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing and layer processing, etc.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the hardware abstraction layer (hardware abstract layer, HAL) is the interface layer between the operating system kernel and the hardware circuits. Its purpose is to abstract the hardware: it can encapsulate the Linux kernel drivers, provide standard interfaces upward, and hide low-level driver implementation details.
  • HAL can include hardware composer (hardware composer, HWC), frame buffer (frame buffer), sensor interface, Bluetooth interface, WiFi interface, audio and video interface, etc.
  • the hardware composer (HWC) is used to buffer the graphics data stream synthesized by SurfaceFlinger for final composite display.
  • the frame buffer (frame buffer) is a graphics buffer stream synthesized by the SurfaceFlinger service.
  • the SurfaceFlinger service draws images by writing content to the frame buffer.
  • the sensor interface can be used to obtain the corresponding data from the sensor driver.
  • the kernel layer is the layer between hardware and software, which is used to provide core system services, such as security, memory management, process management, network protocol stack, driver model, etc.
  • the kernel layer can include a display controller driver, a sensor driver, a camera driver, an audio driver, a video driver, etc.
  • the driver communicates with the hardware device through the bus, controls the hardware to enter various working states, obtains the value of the device-related registers, and obtains the state of the device. For example, the driver can obtain user operation events, such as sensor input, camera input, etc., and convert the events into data.
  • the software and the hardware can work together, and when the head-mounted display device 200 detects the movement of the user, a corresponding image is generated for display.
  • a corresponding hardware interrupt is sent to the kernel layer.
  • the sensor driver at the kernel layer obtains the sensor input data.
  • the input manager at the application framework layer gets sensor input events from the kernel layer.
  • the window manager may then need to update the size or position of the application window, and the activity manager updates the display activity's lifecycle. After the window manager has adjusted the size and position of the window and the active display interface, the display manager will refresh the image in the window area.
  • the surface manager will use the SurfaceFlinger service to mix the graphics data rendered by the GPU in order, generate a graphics buffer stream, and output it to the frame buffer.
  • the frame buffer then sends the graphics buffer stream to the hardware compositor of the hardware abstraction layer, and the hardware compositor performs final compositing on the graphics buffer stream.
  • the hardware compositor sends the final graphic data to the display driver of the kernel layer, and the image is displayed by the display device 194 .
  • the software architecture shown in this embodiment does not constitute a specific limitation to the present application.
  • the software architecture of the electronic device 100 may include more or fewer modules than shown in the figure, or combine some modules, or split some modules, or arrange different architectures.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • Fig. 5 shows a communication system 30, which may be a split system in some embodiments.
  • the communication system 30 may include multiple intelligent terminal devices, and communication connections are established between the multiple terminal devices.
  • the communication system 30 includes a head-mounted display device 200 and a host 310.
  • the host 310 can be, for example, a device with strong image processing capabilities such as a computer A or a mobile phone A, and is not limited to that shown in FIG. 3
  • the host 310 can be one or more devices with strong image processing performance, such as a server, and the host 310 can also be one or more cloud devices, such as cloud hosts/cloud servers. There is no limit to this.
  • a first connection is established between the head-mounted display device 200 and the host 310 .
  • the head-mounted display device 200 is mainly used for displaying images, and the host 310 mainly provides image calculation and processing functions, and then sends them to the head-mounted display device 200 for display through a first connection.
  • the first connection may be a wired connection or a wireless connection.
  • a terminal device may also be referred to simply as a terminal, and a terminal device is generally an intelligent electronic device that can provide a user interface, interact with a user, and provide a service function for the user.
  • Each terminal device in the communication system 30 can run an operating system such as HarmonyOS (HOS) or another type of operating system; the operating systems of the terminal devices in the communication system 30 may be the same or different, which is not limited in this application.
  • When multiple terminals in the communication system 30 run such a system, the system composed of the multiple terminals can be called a super virtual device, also known as a hyper terminal.
  • a hyper terminal refers to integrating the capabilities of multiple terminals through distributed technology, storing them in a virtual hardware resource pool, and uniformly managing, scheduling, and integrating terminal capabilities according to business needs to provide services externally, so that different terminals realize fast connection, mutual assistance, and resource sharing.
  • the first connection may include a wired connection, such as a high definition multimedia interface (high definition multimedia interface, HDMI) connection, a display port (DP) connection, etc., and the first connection may also include a wireless connection, such as a bluetooth (BT) connection , wireless fidelity (wireless fidelity, Wi-Fi) connection, hotspot connection, etc., to realize communication between the head-mounted display device 200 and the host 310 under the same account, no account or different account.
  • the first connection can also be an Internet connection.
  • the head-mounted display device 200 and the host 310 can log in the same account, so as to realize connection and communication through the Internet.
  • multiple terminals can also log in to different accounts, but they are connected through binding.
  • the head-mounted display device 200 and the host 310 can log in to different accounts, and the host 310 sets to bind the head-mounted display device 200 with itself in the device management application, and then connects through the device management application.
  • the embodiment of the present application does not limit the type of the first connection, and the terminals in the communication system 30 can perform data transmission and interaction through various types of communication connections.
  • terminals may also be connected and communicate in combination with any of the above methods, which is not limited in this embodiment of the present application.
  • each terminal device may be configured with a mobile communication module and a wireless communication module for communication.
  • the mobile communication module can provide wireless communication solutions including 2G/3G/4G/5G applied on the terminal.
  • the wireless communication module may include a Bluetooth (bluetooth, BT) module and/or a wireless local area network (wireless local area networks, WLAN) module and the like.
  • the Bluetooth module can provide one or more Bluetooth communication solutions including classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (Bluetooth low energy, BLE), and the WLAN module can provide wireless fidelity point-to-point connections (wireless fidelity peer-to-peer, Wi-Fi P2P), wireless fidelity local area networks (wireless fidelity local area networks, Wi-Fi LAN) or wireless fidelity software access point (wireless fidelity software access point, Wi-Fi softAP) One or more WLAN communication solutions.
  • Wi-Fi P2P refers to allowing devices in a wireless network to connect to each other in a point-to-point manner without going through a wireless router. It may also be called wireless fidelity direct (Wi-Fi direct).
  • Devices that establish a Wi-Fi P2P connection can exchange data directly through Wi-Fi (they must be in the same frequency band) without connecting to a network or hotspot, realizing point-to-point communication, such as transferring files, pictures, videos, and other data.
  • Wi-Fi P2P has the advantages of faster search speed and transmission speed, and longer transmission distance.
  • the communication system 30 shown in FIG. 3 is only used to assist in describing the technical solution provided by the embodiment of the present application, and does not limit the embodiment of the present application.
  • the communication system 30 may include more or fewer terminal devices, such as handheld controllers, etc.
  • This application does not make any limitations on terminal types, terminal quantities, connection methods, etc.
  • For example, the communication system 30 may also include a PC, a mobile phone, and the like.
  • MTP delay refers to the sum of the delays generated in the cycle from the user's head movement until the optical signal of the correspondingly displayed new image on the display screen of the head-mounted display device reaches the human eye, including the time for the sensor to detect the user's motion, for the central processing unit (CPU) to run the application, and for the graphics processing unit (GPU) to calculate, render, and send the presented image.
  • the sensor is used to detect user movement information and/or external environment information, so as to ensure that the viewing angle of the screen can be switched at any time while tracking body movements.
  • The sensors include, for example, cameras, gyroscopes, accelerometers, magnetometers, etc.
  • the handheld somatosensory controller can sensitively capture the user's movement and generate sensor data in real time, such as image data, acceleration data, angular velocity data, etc.
  • the processor can take the sensor data generated by the sensor in the previous step as input, and obtain the user's real-time pose data through data fusion and calculation.
  • Pose refers to the position and orientation (posture) of a person or object.
  • the pose may be the head pose of the user.
  • the pose can be obtained through sensors and/or cameras in the head-mounted display device.
  • If the head-mounted display device is a 3dof device, the output pose data includes quaternion data corresponding to the rotation (rotation in the X-axis, Y-axis, and Z-axis directions); if the head-mounted display device is a 6dof device, the output pose data includes the quaternion data corresponding to the rotation (rotation in the X-axis, Y-axis, and Z-axis directions) and the three-axis position (up-down, front-back, left-right movement) data.
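  • As an illustrative sketch (not a definitive data format of this application), the pose data described above can be modeled as a rotation quaternion for a 3dof device plus an optional three-axis position for a 6dof device; the field names below are assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Pose:
    """Pose data output by the motion tracking module.

    rotation: unit quaternion (w, x, y, z) describing rotation about
              the X, Y and Z axes.
    position: three-axis position (up-down, front-back, left-right);
              None for a 3dof device, present for a 6dof device.
    """
    rotation: Tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0)
    position: Optional[Tuple[float, float, float]] = None

    @property
    def dof(self) -> int:
        return 6 if self.position is not None else 3

# A 3dof headset reports only orientation; a 6dof headset also reports position.
pose_3dof = Pose(rotation=(0.92, 0.38, 0.0, 0.0))
pose_6dof = Pose(rotation=(0.92, 0.38, 0.0, 0.0), position=(0.0, 1.6, -0.2))
```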
  • the rendering module performs anti-distortion and other processing on the image, and then sends it to a processor such as the GPU for drawing, coordinate transformation, view transformation, layer rendering, texture synthesis, coloring, cropping, rasterization, and other rendering processes. If it is a VR wearable device, two images corresponding to the left and right eyes can be rendered respectively.
  • image rendering includes rendering images such as color and transparency, and also includes rendering images that are rotated and/or translated based on human body pose data detected by VR wearable devices.
  • the posture detected by the VR wearable device includes multiple degrees of freedom such as rotation angle and/or translation distance, where the rotation angle includes the yaw angle, pitch angle, and roll angle, and the translation distance includes the translation distance along the three axes (X-axis, Y-axis, Z-axis). Therefore, image rendering includes performing rotation processing on the image according to the rotation angle of the VR wearable device, and/or performing translation processing on the image according to the translation distance of the VR wearable device. A sketch of how such a pose can drive the rendering camera is given below.
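  • The following is a minimal sketch, under assumed axis conventions (yaw about Y, pitch about X, roll about Z, right-handed frame), of turning the detected rotation angles and translation distance into a view matrix so that the rendered image follows the head pose; it is illustrative only:

```python
import numpy as np

def rotation_from_euler(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Build a 3x3 rotation matrix from yaw (about Y), pitch (about X)
    and roll (about Z), all in radians. Axis conventions are assumed."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Ry @ Rx @ Rz

def view_matrix(yaw: float, pitch: float, roll: float, translation) -> np.ndarray:
    """4x4 view matrix: world points are rotated by R^T and shifted by
    -R^T t, so the rendered scene moves opposite to the head motion."""
    R = rotation_from_euler(yaw, pitch, roll)
    t = np.asarray(translation, dtype=float)
    V = np.eye(4)
    V[:3, :3] = R.T
    V[:3, 3] = -R.T @ t
    return V
```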
  • In this way, the rendered image objects (for example, mountains, water, etc.) seen by the user are linked with the user's movement and the user's perspective, and the experience is better.
  • the image sending and display module is used to send the image data (multiple pixel data) rendered by processors such as GPU to the display device.
  • the display device includes a frame buffer.
  • the frame buffer can also be called a video memory, and is used to store image rendering data processed by the GPU or to be extracted soon.
  • Sending the rendered data to the display device by the electronic device may refer to sending to the frame buffer in the display device.
  • the display device may not have a frame buffer, and sending the rendered data to the display device by the electronic device may refer to directly sending image data (multiple pixel data) to the display device for display.
  • After the GPU submits the image rendering data to the frame buffer, in conjunction with the vertical sync (Vsync) signal of the screen refresh, the video controller extracts the image data from the frame buffer at a specified time point after the Vsync signal; after the display screen or other optical display device receives the display signal, it extracts the buffered frame and displays it on the screen.
  • When the display screen is used to display images, it scans the screen from left to right and from top to bottom to display pixels sequentially. After one line is scanned, a horizontal sync (Hsync) signal is sent; after one page is scanned, exactly one frame of picture has been displayed, a Vsync signal is sent, and the scanning of the next page starts.
  • the Vsync signal is a pulse signal, which is generally generated by a hardware clock, and can also be simulated by software (such as a hardware synthesizer HWC).
  • the electronic device waits for the Vsync signal to be sent, and then renders a new frame of image and updates the frame buffer, so as to avoid the phenomenon of tearing of the display picture and increase the smoothness of the picture. If the data of the current frame buffer is not completely updated and the data of the previous frame is still retained, then when the screen is refreshed, the frames captured from the frame buffer come from different frames, resulting in a sense of screen tearing.
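  • The Vsync-paced double buffering described above can be sketched as follows; this is an illustrative simulation (the 60 Hz refresh period, the render_frame and present callbacks, and the software-timed Vsync are assumptions), not the actual display driver:

```python
import time

REFRESH_HZ = 60                      # assumed display refresh rate
VSYNC_PERIOD = 1.0 / REFRESH_HZ

def display_loop(render_frame, present):
    """Render into a back buffer, then swap only on the Vsync boundary, so the
    scan-out never reads a half-updated frame (no tearing)."""
    front, back = bytearray(), bytearray()
    next_vsync = time.monotonic() + VSYNC_PERIOD
    while True:
        back[:] = render_frame()                 # draw the complete new frame off-screen
        sleep_s = next_vsync - time.monotonic()
        if sleep_s > 0:
            time.sleep(sleep_s)                  # wait for the (simulated) Vsync pulse
        front, back = back, front                # whole-frame buffer swap at Vsync
        present(front)                           # scan-out only ever sees complete frames
        next_vsync += VSYNC_PERIOD
```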
  • Common display screen types include: liquid crystal display (LCD), organic light emitting diode (OLED), plasma display panel (PDP), etc.
  • For example, for an LCD display, the arrangement of the liquid crystal molecules is changed by changing the electric field (voltage), so that the light transmittance of the external light source (incident light beam) through the liquid crystal is changed, and the color display is then completed through the color filter (red, green, and blue primary-color filter films).
  • For an OLED display, an organic light-emitting layer is sandwiched between the positive and negative electrodes.
  • the holes generated by the anode and the electrons generated by the cathode move and are injected into the hole transport layer and the electron transport layer respectively.
  • After migrating to the light-emitting layer, when the holes and electrons meet in the light-emitting layer, energy excitons are generated, thereby exciting the light-emitting molecules to finally generate visible light; color filters are then added to complete the color display.
  • the display principles and hardware structures of different types of display screens are not the same, and will not be repeated here. Different types of display screens do not limit this embodiment of the application.
  • the drawing of an image generally requires the cooperation of the CPU and the GPU.
  • the CPU can be responsible for calculating image-related display content, such as view creation, layout calculation, image decoding, text drawing, pose data calculation, texture data calculation, etc., and then the CPU passes the calculated content to the GPU to perform transformation, compositing, rendering, and more.
  • The GPU can be responsible for layer rendering, texture synthesis, shader rendering, etc., which can include: the vertex shader processes the vertex data, performing operations such as translation, scaling, rotation, various model transformations, viewing transformations, and projection transformations, and converts it into normalized coordinates; the tessellation shader and the geometry shader depict the shape of the object and process the geometry of the model to make it smoother; the processed data is assembled into primitives; the primitives are clipped according to the viewport, and the parts invisible in the viewport are cut out; the primitives are rasterized to generate display screen coordinates and pixel fragments; the fragment shader colors the pixel fragments; after fragment shader processing, texture blending and other rendering processes are performed, and finally the pixel data of the image is obtained.
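  • As a minimal numerical sketch of the vertex stage just described (model/view/projection transform, perspective divide to normalized coordinates, then viewport mapping to display-screen coordinates), with the matrices and screen size as assumed inputs:

```python
import numpy as np

def transform_vertex(v_model, model, view, projection, screen_w, screen_h):
    """Transform one model-space vertex into display-screen pixel coordinates.

    Steps mirror the vertex stage described above: model/view/projection
    transform -> clip space -> perspective divide (normalized device
    coordinates) -> viewport transform.
    """
    v = np.append(np.asarray(v_model, dtype=float), 1.0)   # homogeneous coordinates
    clip = projection @ view @ model @ v                    # clip space
    ndc = clip[:3] / clip[3]                                # normalized [-1, 1] coordinates
    x = (ndc[0] * 0.5 + 0.5) * screen_w                     # viewport transform
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * screen_h             # screen y grows downward
    depth = ndc[2] * 0.5 + 0.5                              # depth value for the z-test
    return x, y, depth
```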
  • the GPU will send the pixel data of the image to the frame buffer.
  • the video controller extracts the pixel data of the image frame in the frame buffer at a specified time, and then displays the frame image on the display screen.
  • each frame of image needs to cooperate with the Vsync signal.
  • Within one Vsync cycle, a complete frame of image needs to be rendered and sent to the frame buffer; otherwise, the display screen will freeze, drop frames, tear, etc.
  • When the AR/VR or MR device has a split structure, such as in the communication system 30, it may also include a data transmission module, which can be used to send or receive image-related data through a wired connection or a wireless connection.
  • the sensor module captures the movement of the user's head and generates sensor data, which is sent to the host 310 by the data transmission module, and the motion tracking algorithm module in the host 310 After the sensor data is processed, data such as pose and pose are generated, and then sent to the head-mounted display device 200 through the data transmission module.
  • the image rendering module in the head-mounted display device 200 can render the pose and other data, and then the rendered data is sent to the frame buffer and finally displayed on the display screen.
  • the rendering module can be located on the host 310, when the user wears the head-mounted display device 200 and the head moves, the sensor module captures the movement of the user's head, generates sensor data, and sends it to the host 310 by the data transmission module.
  • the motion tracking algorithm module in the host 310 processes the sensor data to generate pose and other data, the image rendering module in the host 310 can render the pose data, etc., and then the rendered data is sent to the head-mounted display device 200 through the data transmission module.
  • the rendered image data is finally extracted by the display sending module and displayed on the display screen.
  • the human-computer interaction performance can be improved, thereby reducing the user's dizziness, vomiting and other adverse reactions, and improving the user experience.
  • each link can be optimized to reduce the time spent on it, such as increasing the frame rate of image acquisition, providing predicted poses, improving the rendering performance of the GPU, increasing the display refresh rate to above 75 Hz, and so on.
  • the MTP delay can be decomposed into various links.
  • Figure 8 shows the time-consuming analysis of each module. As shown in Figure 8:
  • the sensor detection module is used to detect user movement information and/or external environment information.
  • Here, a camera and an IMU are taken as examples of the sensors for illustration.
  • the frequency of image generation is 30Hz. Assuming that the exposure time of a frame of image is 10ms, and the time stamp of the image is in the middle of the image exposure, there is a 5ms delay between the time stamp of the image and its final imaging.
  • For the IMU, its frequency is 1000 Hz, and it can be considered that the generation of IMU data has almost no delay.
  • Due to the large amount of image data, there is also a certain delay between image generation and transmission to the processor. Assuming it is 5 ms, the total time from sensor data generation to transmission completion is 10 ms.
  • Motion tracking algorithm module: the processor can calculate the user's pose data based on the sensor data, and it is assumed that this step takes 5 ms to complete the calculation. Of course, in this module the time-consuming calculation can be reduced through some optimization methods, and the overall MTP delay can even be reduced by predicting the pose, but for now only real-time pose calculation is considered; that is, this module introduces a delay of 5 ms.
  • Image sending and display module: the image sending and display module is used to send the rendered image frame data to the display device. Due to the large amount of image data finally sent to the display, it is assumed that it takes 8 ms for the transmission and sending to display.
  • the total time spent on performing the above four steps in series is 31 ms, that is, the MTP delay from the user's head movement to the final display of the image is 31 ms.
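  • The 31 ms figure can be reproduced by summing the per-module times above; as an illustrative check (the 8 ms for image rendering is taken from the later statement that rendering a whole image takes 8 ms):

```python
# Serial MTP delay in the example above (all values in milliseconds).
stage_ms = {
    "sensor data generation and transmission": 10,  # 5 ms exposure offset + 5 ms transfer
    "motion tracking (pose calculation)": 5,
    "image rendering": 8,    # assumed from the later statement that a whole image renders in 8 ms
    "image sending and display": 8,
}

mtp_delay_ms = sum(stage_ms.values())
print(mtp_delay_ms)  # 31 ms, matching the serial MTP delay described above
```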
  • Table 1 shows the time-consuming analysis of each step of the MTP delay in the corresponding AR scenario.
  • The generation of the above sensor data and the calculation of the pose, that is, the work done by the sensor detection module and the motion tracking algorithm module, are collectively referred to as the image acquisition stage.
  • After image acquisition, image rendering and image display can be performed. With reference to the previous FIG. 8, the steps described in Table 1 can be simplified into three stages: image acquisition, image rendering, and image display, which take 15 ms, 8 ms, and 8 ms respectively, as shown in FIG. 9 .
  • the data transmission between the host and the head-mounted display device will also take a certain amount of time.
  • the time spent on data transmission can also be included in the time spent in the image acquisition phase.
  • the process from image acquisition, image rendering to image display is implemented serially, and the minimum image unit of each link is a whole image, and there is redundancy in time, which can be further optimized.
  • the present application provides an image transmission and display method, which can divide a whole set of images into multiple parts, and perform parallel image rendering and/or parallel image display processing on these multiple parts , and/or transmit the multiple partial images in parallel between multiple devices, so that the display path is optimized, thereby reducing the waiting time in the process of image transmission and display, and further reducing the delay time of image transmission and display.
  • the implementation of the method provided in this application can reduce the delay time in the image transmission and display process, and the image can be displayed faster. Furthermore, in AR/VR or MR scenarios, it can reduce the MTP delay, effectively alleviate the adverse reactions of motion sickness such as dizziness and vomiting, and improve the user experience. It can be understood that the method provided by this application can be applied to more scenarios, such as a PC screen displaying images, a vehicle-mounted device displaying images, etc.; through the parallel processing of sliced transmission, rendering, and sending for display, the end-to-end display process is sped up, the delay is reduced, and the user's viewing experience is improved.
  • the present application also provides related electronic equipment or systems, which can apply the image transmission and display method provided in the present application.
  • the electronic device may include the aforementioned head-mounted display device, and the head-mounted display device may realize display effects such as AR, VR, and MR.
  • the electronic device involved in the embodiment of the present application may also be other devices including display screens, such as terminal devices such as mobile phones, PADs, PCs, smart TVs, and vehicle-mounted devices.
  • This application does not impose any restrictions on the specific types of electronic devices.
  • the communication system 30 in the foregoing example does not impose any limitation on other embodiments of the present application.
  • the various embodiments provided by this application are mainly introduced by taking the electronic device as a head-mounted display device in a VR scene as an example, but the image transmission and display methods provided by the embodiments of this application are not limited to head-mounted display devices and VR scenes; more generally, the transmission and display of images by various types of electronic devices (such as mobile phones, PCs, PADs, vehicle-mounted devices, game consoles, smart wearable devices, smart home devices, Internet of Things devices, etc.) can universally use the image transmission and display methods provided in this application.
  • the image transmission and display method provided by the embodiment of this application can be applied to various business scenarios, including but not limited to:
  • the user can wear a head-mounted display device, and the head-mounted display device can use VR, AR, MR and other technologies to display images, so that users can feel the virtual environment and provide users with VR/AR/MR experience.
  • the MTP delay is relatively large, which may easily cause adverse reactions such as dizziness and vomiting of users.
  • VR/AR/MR images can be displayed faster, and the MTP delay is reduced, effectively alleviating the user's dizziness, vomiting and other adverse reactions of motion sickness, and improving the user experience.
  • In a screen-casting scenario, the PC or large screen can display the mobile phone interface more quickly, giving users a smoother screen-casting experience.
  • a smart car display can generate new images based on feedback from user operations, on-board sensors, or other input information.
  • With the method provided in this application, the vehicle-mounted display can display the corresponding images more quickly, giving users a faster and more comfortable driving experience.
  • the scenarios described above are only examples for illustration, and do not constitute any limitation to other embodiments of the present application. Not limited to the above scenarios, the image transmission and display method provided by the embodiment of the present application can be applied to other social scenarios, office scenarios, shopping scenarios and other scenarios that require image display.
  • the image transmission and display technical solution provided by this application optimizes the display path by dividing a whole image into multiple parts for parallel processing of transmission, rendering, and sending for display, thereby reducing the waiting time in the process of image transmission and display and further reducing the delay time of image transmission and display.
  • it can reduce the MTP delay, effectively alleviate the adverse reactions of motion sickness such as dizziness and vomiting, and improve the user experience.
  • Fig. 10 illustrates the process of parallel rendering and displaying provided by some embodiments.
  • After acquiring the sensor data and calculating the pose data by the processor, the head-mounted display device 200 acquires the image data of a whole image for rendering. As shown in FIG. 11 , it is assumed that the whole first image is horizontally divided into four parts from top to bottom, which are respectively the first slice Slice1, the second slice Slice2, the third slice Slice3, and the fourth slice Slice4.
  • The whole image is divided evenly into Slice1, Slice2, Slice3, and Slice4. The rendering time of the whole image is 8 ms and the time for sending it to display is 8 ms, so the rendering time of each of the evenly divided slices Slice1, Slice2, Slice3, and Slice4 is 2 ms, and the sending-to-display time of each is 2 ms.
  • the GPU and other processors first render Slice1, which takes 2ms.
  • After Slice1 is rendered, the rendered image pixel data is sent to the display device (for example, a display screen), which can also be referred to as sending for display, and this takes 2 ms.
  • In some embodiments, the display device includes a frame buffer, so sending for display may refer to sending the rendered image pixel data to the frame buffer. While Slice1 is being sent for display, processors such as the GPU render Slice2, which takes 2 ms, and then Slice2 is sent for display, which takes 2 ms; while Slice2 is being sent for display, the GPU and other processors render Slice3, which takes 2 ms, and then Slice3 is sent for display, which takes 2 ms; while Slice3 is being sent for display, the GPU and other processors render Slice4, which takes 2 ms, and then Slice4 is sent for display, which takes 2 ms.
  • the display screen includes a frame buffer, and after sending and displaying Slice1, Slice2, Slice3, and Slice4, the display screen extracts the complete data of the first image from the frame buffer for display in conjunction with the Vsync signal.
  • the display screen extracts the pixel data of the first image from the frame buffer and completely displays the first image.
  • the display screen does not have a frame buffer, as shown in Figure 12, when Slice1 is sent and displayed, the display screen directly displays the image of Slice1; when Slice2 is sent and displayed, the display screen displays the images of Slice1 and Slice2 ; After sending and displaying Slice3, the display screen displays the images of Slice1, Slice2 and Slice3; when sending and displaying Slice4, the display screen displays the images of Slice1, Slice2, Slice3, and Slice4, that is, the first image is completely displayed.
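  • A minimal sketch of this pipelined timing (an illustrative simplification: slices are rendered back to back, and sending a slice for display starts as soon as both that slice is rendered and the previous slice has finished sending) shows the gain over the serial 8 ms + 8 ms path:

```python
def pipelined_finish_time(render_ms, send_ms):
    """Finish time when slice i+1 is rendered while slice i is being sent for display."""
    render_done = 0.0
    send_done = 0.0
    for r, s in zip(render_ms, send_ms):
        render_done += r                              # slices are rendered back to back
        send_done = max(render_done, send_done) + s   # sending waits for the slice and the link
    return send_done

even_slices = [2, 2, 2, 2]                            # Slice1..Slice4, times in ms
print(pipelined_finish_time(even_slices, even_slices))  # 10.0 ms, versus 8 + 8 = 16 ms serially
```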
  • This embodiment does not impose any restrictions on the way of dividing each image, which can be divided into any number such as four or six, can be divided horizontally, vertically or in any direction, and can be divided evenly or unevenly. Usually, dividing the image evenly can make full use of the parallel processing capability of each module and save more time than dividing the image unevenly.
  • the size of each slice is basically the same or the same; when divided unevenly, the size of each slice can be different.
• For example, suppose the rendering time of an entire image is 8ms and its sending-for-display time is 8ms, and the image is divided into the fifth slice Slice5, the sixth slice Slice6, the seventh slice Slice7, and the eighth slice Slice8 in a ratio of 2:3:1:2. The rendering times of these slices are then 2ms, 3ms, 1ms, and 2ms respectively, and their sending-for-display times are 2ms, 3ms, 1ms, and 2ms respectively.
  • GPU and other processors first render Slice5, which takes 2ms. After rendering Slice5, it takes 2ms to send it to the display.
• While Slice5 is being sent for display, processors such as the GPU render Slice6, which takes 3ms; after Slice5 has been sent for display and Slice6 has been rendered, Slice6 is sent for display, which takes 3ms. While Slice6 is being sent for display, processors such as the GPU render Slice7, which takes 1ms, and then render Slice8, which takes 2ms. After Slice6 has been sent for display, Slice7 is sent for display, which takes 1ms; after Slice7 has been sent for display, Slice8 is sent for display, which takes 2ms.
  • the specific way to divide the image can be customized by the developer according to the actual situation.
• the best division scheme is the one for which, while the efficiency of the GPU is maximized (that is, power consumption and performance are well balanced), the total time spent on image rendering plus sending for display is the shortest.
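• As a non-limiting illustration of the timing analysis above, the following Python sketch computes the completion time of the two-stage render / send-for-display pipeline described here and compares the even division with the 2:3:1:2 division. The function name pipeline_finish_time and the way the pipeline is modelled are assumptions made only for this illustration.

```python
# Illustrative sketch: completion time of a two-stage pipeline in which slice i
# can be sent for display only after (a) slice i itself has been rendered and
# (b) slice i-1 has finished being sent for display.
def pipeline_finish_time(render_ms, send_ms):
    render_done = 0.0   # when the most recently rendered slice finished rendering
    send_done = 0.0     # when the most recently sent slice finished sending
    for r, s in zip(render_ms, send_ms):
        render_done += r                              # slices are rendered back to back
        send_done = max(send_done, render_done) + s   # sending waits for both conditions
    return send_done

# Even division of an 8 ms render / 8 ms send frame into four slices (Slice1-Slice4).
print(pipeline_finish_time([2, 2, 2, 2], [2, 2, 2, 2]))   # 10.0 ms, versus 16 ms serially

# Uneven 2:3:1:2 division of the same frame (Slice5-Slice8).
print(pipeline_finish_time([2, 3, 1, 2], [2, 3, 1, 2]))   # 11.0 ms
```

• Consistent with the remark above, the even division finishes earlier (10 ms) than the uneven one (11 ms), and both are much shorter than the 16 ms needed when the whole frame is first rendered and then sent for display.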
  • the VR glasses will render and display the image after acquiring the image.
  • Image distortion is a phenomenon in which normal images are distorted due to the inherent characteristics of optical lenses (such as convex lenses) in VR glasses. The reason for this is that light rays are more curved away from the center of the lens than closer to the center of the lens. Distortion is distributed along the radius of the lens, including barrel distortion and pincushion distortion.
• Image pre-distortion counteracts the image distortion effect of the optical lens in the VR glasses by performing reverse distortion preprocessing on the normal image in advance, so that the image the user finally sees is a normal image.
• For example, the image distortion of the optical lens in VR glasses can turn a normal image into a pincushion-distorted image, so in the image rendering stage, image pre-distortion can be performed in advance to turn the normal image into a barrel-distorted image. Then, after the barrel-distorted image passes through the pincushion distortion effect of the optical lens in the VR glasses, the image presented to the user's eyes is a normal image.
  • Time warp is a technology of image frame correction.
  • the rendering of the scene is delayed because the head moves too fast, that is, the user's head has been turned, but the image has not been rendered, or the image of the previous frame is rendered. If the rendering time is too long, a frame will be lost and the result will be a jittery image.
• Time warp alleviates rendering lag by warping an image before it is sent to the display. The most basic time warp is orientation-based: it corrects the image shake caused by the rotation and posture of the head, and it can generate a new image frame with relatively few computing resources.
  • Time warping can generate an image to replace the frame that has not been rendered when the image rendering frame is not synchronized with the head movement, that is, automatically fill the image frame, so that the front and rear image frames transition smoothly.
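• A minimal sketch of orientation-only time warp is given below (Python with NumPy, and OpenCV assumed available for the warp itself); it reprojects an already-rendered frame by the rotation accumulated between the render-time head pose and the latest head pose. The intrinsic matrix, image size, and rotation angle are hypothetical values used only for illustration and are not the specific implementation of this application.

```python
import numpy as np
import cv2  # assumed available; any homography-based image warp would work

def timewarp_homography(K, R_render_to_latest):
    # Pure-rotation reprojection: maps pixels of the rendered frame to where
    # they would appear under the newest head orientation (no translation).
    return K @ R_render_to_latest @ np.linalg.inv(K)

# Hypothetical pinhole intrinsics for a 2000 x 2000 eye buffer.
K = np.array([[1000.0,    0.0, 1000.0],
              [   0.0, 1000.0, 1000.0],
              [   0.0,    0.0,    1.0]])

# Hypothetical 1-degree yaw accumulated between rendering and scan-out.
yaw = np.deg2rad(1.0)
R = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
              [         0.0, 1.0,         0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])

rendered = np.zeros((2000, 2000, 3), dtype=np.uint8)     # stand-in for a rendered frame
H = timewarp_homography(K, R)
warped = cv2.warpPerspective(rendered, H, (2000, 2000))  # frame shown instead of re-rendering
```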
  • each Slice is a 1 ⁇ 4 grid, as shown in (b) in Figure 14.
  • Each Slice needs to be pre-distorted. For each image vertex in the Slice, the pixel position after distortion corresponding to the vertex can be calculated through the pixel position before distortion in the original image and the distortion formula.
  • Pixels other than vertices can use interpolation methods (such as linear interpolation, bilinear interpolation, cubic spline interpolation, etc.) to calculate the distorted pixel position, and finally generate the pre-distortion effect shown in (b) in Figure 14.
  • time warping is also performed on the image.
  • the collected images are rotated accordingly to obtain a new image frame as the final display image, as shown in (c) in Figure 14 after pre-distortion and image time warping.
  • the predicted pixel position corresponding to the vertex can be obtained through calculation.
  • Pixels other than vertices can use interpolation methods (such as linear interpolation, bilinear interpolation, cubic spline interpolation, etc.) to generate predicted pixel positions.
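• The per-slice pre-distortion described above can be sketched as follows (Python with NumPy). The radial model and its coefficient k1 are placeholders for whatever distortion formula matches the actual lens; only the structure of distorting the grid vertices and interpolating the remaining pixels mirrors the description.

```python
import numpy as np

def predistort_vertex(x, y, cx, cy, k1=-0.2):
    # Placeholder barrel pre-distortion of one grid vertex: pull the vertex
    # toward the optical centre so the lens's pincushion distortion cancels it.
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    return cx + dx * scale, cy + dy * scale

def lerp(p0, p1, t):
    # Linear interpolation used for pixels lying between two distorted vertices;
    # bilinear or cubic spline interpolation could be used instead.
    return (1.0 - t) * np.asarray(p0) + t * np.asarray(p1)

# A 1 x 4 grid of normalised vertex coordinates for one horizontal slice
# (illustrative values), with the optical centre assumed at (0.5, 0.5).
grid = [(0.0, 0.25), (1.0 / 3.0, 0.25), (2.0 / 3.0, 0.25), (1.0, 0.25)]
distorted = [predistort_vertex(x, y, 0.5, 0.5) for x, y in grid]

# Distorted position of a pixel halfway between the first two vertices.
print(lerp(distorted[0], distorted[1], 0.5))
```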
• After each image is rendered in the GPU, it is sent to the display device, referred to as sending for display.
  • the hardware Vsync signal is a pulse signal sent after the display refreshes a whole frame of images. If the fragmented image rendering and display process does not match the Vsync signal, the display may only refresh a part of the image, and the unrefreshed part displays the part of the previous frame of image, resulting in screen tearing.
  • SurfaceFlinger can synthesize the image data in the buffer and then send it to the display device, or a hardware composer (hardware composer, HWC) can use hardware to complete the synthesis of the image data and send it to the display.
  • the Vsync signal matches the refresh rate of the screen. When a Vsync signal arrives, the screen starts to refresh pixels from top to bottom and from left to right. What's more, the refreshing of each row can also cooperate with a horizontal sync (horizontal sync, Hsync) signal, and the pixels of each row can cooperate with a pixel clock (pixel clock, PCLK) signal for transmission.
  • each pixel data will be written from left to right and from top to bottom, and a screen refresh will be completed within one Vsync cycle.
• the electronic device can control the display time and, at a fixed time point within a cycle, fetch the pixels at a fixed position in the cache for display. For example, if a time point is specified and the electronic device expects that the GPU can complete the rendering of the image before this time point, then when the time point arrives, the HWC of the electronic device sends the pixels of the image, in order, to the fixed position of the display screen for display.
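• To make the relationship between the Vsync, Hsync, and PCLK signals concrete, the short calculation below derives per-line and per-pixel timing from a hypothetical 90 Hz, 2160 x 2160 panel with blanking intervals ignored; real panels add horizontal and vertical blanking, so the numbers are purely illustrative.

```python
# Illustrative scan-out timing for a hypothetical panel (blanking ignored).
refresh_hz = 90
width, height = 2160, 2160

vsync_period_ms = 1000.0 / refresh_hz                   # one whole-frame refresh: ~11.1 ms
hsync_period_us = vsync_period_ms * 1000.0 / height     # time to refresh one row of pixels
pixel_clock_mhz = refresh_hz * width * height / 1e6     # pixels written per second

print(f"Vsync period: {vsync_period_ms:.2f} ms")
print(f"Hsync period: {hsync_period_us:.2f} us per row")
print(f"Pixel clock:  {pixel_clock_mhz:.1f} MHz")
```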
  • black insertion technology is also involved in the display process.
• black insertion is used to prevent the smearing caused by the persistence of vision of the human eye.
• the electronic device completes the rendering, sending for display, and displaying of a whole frame of image in synchronization.
  • image rendering may be performed in advance in order to complete the display sending in time before the display signal arrives.
  • the electronic device can adjust the brightness of the entire display screen to zero to implement black insertion, and the time is about 80% of a Vsync period. For example, if the refresh rate of the display screen is 90Hz and its Vsync period is 11.1ms, then the black insertion time is about 8.9ms.
• When the black insertion ends and the display is turned on, the electronic device needs to ensure that all image frames to be displayed have been refreshed onto the screen, so that the user sees a new and complete image.
  • the electronic device needs to render and send all the sliced images (that is, Slice1 , Slice2 , Slice3 , and Slice4 ) to display.
  • Slice1 is sent to display; while Slice1 is sent to display, Slice2 is rendered.
  • Slice2 is sent to display; while Slice2 is sent to display, Slice3 is rendered.
  • Slice3 is sent to display; while Slice3 is sent to display, Slice4 is rendered.
• after the black insertion is completed, the display screen lights up and displays a complete image composed of Slice1, Slice2, Slice3, and Slice4.
  • the time periods of image rendering and image sending and display are both shorter than one Vsync period.
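• The black-insertion budget in this example can be checked with a short calculation. The 90 Hz refresh rate and the roughly 80% black-insertion ratio come from the text above; the per-slice 2 ms figures reuse the earlier example, and the pipeline model is the same illustrative one used before.

```python
# Check how the pipelined render + send-for-display of all slices relates to the
# black-insertion window of one Vsync period (values from the example above).
refresh_hz = 90
vsync_period_ms = 1000.0 / refresh_hz      # ~11.1 ms
black_ms = 0.8 * vsync_period_ms           # ~8.9 ms during which the screen stays dark

render_ms = [2, 2, 2, 2]
send_ms = [2, 2, 2, 2]

render_done, send_done = 0.0, 0.0
for r, s in zip(render_ms, send_ms):
    render_done += r                              # slices rendered back to back
    send_done = max(send_done, render_done) + s   # sending waits for render and previous send

print(f"Vsync period {vsync_period_ms:.1f} ms, black window {black_ms:.1f} ms")
print(f"pipelined render + send of all slices: {send_done:.1f} ms")
# If this exceeds the black window, rendering simply starts that much earlier
# (before the Vsync), as noted above, so sending for display still finishes
# before the display lights up.
print(f"rendering should start about {max(0.0, send_done - black_ms):.1f} ms before the Vsync")
```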
• turning off the display of the display screen may include any of the following situations: (1) turning off the backlight power supply of the display screen; (2) turning off the backlight power supply of the display screen and the power supply of the display panel; (3) turning off the backlight power supply of the display screen, the display panel, the screen driver integrated circuit (IC), and the backlight driver IC.
• In case (1), when the processor controls the backlight driver IC to turn off the backlight power supply, the processor still sends display data to the display panel through the screen driver IC and the backlight driver IC, but because the backlight is turned off, the display screen cannot display images. Since the display data is always sent to the display panel, restoring the backlight power supply is fast.
• In case (2), the display panel cannot receive the display data sent by the screen driver IC, and the initial configuration data of the display panel is lost.
• When the power supply of the display screen is restored, each pixel needs to be initialized and configured (for example, some initial potential assignments). Therefore, restoring the display is relatively slow, but turning off the display panel saves its power consumption.
• When the requirement on the response speed for restoring the display is not high, the display panel can be turned off to further save power consumption.
  • the backlight driver IC cannot receive the backlight data sent by the processor.
  • the backlight driver IC also cannot receive the color data sent by the screen driver IC.
• When the power supply of the backlight driver IC is restored, it is similarly necessary to perform the initial configuration of the backlight driver IC.
  • the screen driver IC cannot receive the display data sent by the processor, nor can it send color data to the backlight driver IC.
• Similarly, when the power supply of the screen driver IC is restored, the initial configuration of the screen driver IC is also required; therefore, restoring the display is slow.
• For an OLED display screen, turning off the display of the display screen may include any of the following situations: (1) turning off the power supply of the OLED display panel; (2) turning off the power supply of the OLED display panel and the power supply of the screen driver IC.
• When the processor controls the power supply of the OLED display panel to be turned off, the processor still sends display data to the screen driver IC, but since the power supply of the OLED display panel is turned off, the OLED display screen cannot display images.
• When the power supply of the display screen is restored, each pixel needs to be initialized and configured (for example, some initial potential assignments). Since the display data is always sent to the OLED display panel, restoring the power supply of the OLED display panel is fast.
• When the screen driver IC is controlled to be turned off, the screen driver IC cannot receive the display data sent by the processor, nor can it send display data to the OLED display panel. Similarly, when the power supply of the screen driver IC is restored, the initial configuration of the screen driver IC is also required, so restoring the power supply of the screen driver IC is slow.
• The processor may also turn off the power supply of the pixels in part of the OLED display panel; images then cannot be displayed in that area, so the display of only some areas of the display screen can be turned off.
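• The shut-down levels discussed above can be summarised in a small data structure; the component names and the qualitative restore speeds simply restate the cases in the text, and this structure is only an illustrative way of organising them, not part of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class ShutdownLevel:
    name: str
    powered_off: tuple    # blocks whose power supply is cut
    needs_reinit: tuple   # blocks that must be reconfigured when power is restored
    restore_speed: str    # qualitative, as described in the text

BACKLIT_PANEL_LEVELS = [
    ShutdownLevel("backlight only", ("backlight",), (), "fast"),
    ShutdownLevel("backlight + display panel",
                  ("backlight", "display panel"), ("display panel pixels",), "slower"),
    ShutdownLevel("backlight + panel + driver ICs",
                  ("backlight", "display panel", "screen driver IC", "backlight driver IC"),
                  ("display panel pixels", "screen driver IC", "backlight driver IC"),
                  "slowest"),
]

OLED_LEVELS = [
    ShutdownLevel("OLED panel only", ("OLED display panel",),
                  ("display panel pixels",), "fast"),
    ShutdownLevel("OLED panel + screen driver IC",
                  ("OLED display panel", "screen driver IC"),
                  ("display panel pixels", "screen driver IC"), "slower"),
]

for level in BACKLIT_PANEL_LEVELS + OLED_LEVELS:
    print(f"{level.name}: restore is {level.restore_speed}")
```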
  • the image displayed on the display screen can become a virtual image in the user's eyes.
  • the focal point corresponding to the virtual image can be set within a certain distance from the front of the user's eyes, such as 2 meters or 4 meters, through the optical design of the display screen. The distance may also be a distance interval, such as 2-4 meters. Then the image displayed on the display screen appears to the user's eyeballs as being imaged on the fixed focal point in front of the user's eyeballs.
  • the foregoing embodiments may be applied to a case where an electronic device independently displays an image, and may also be applied to a case where multiple devices cooperate to display an image.
  • FIG. 16 shows a communication system 40 based on wireless transmission.
  • the communication system 40 includes a computer A and a head-mounted display device 200 , and data is transmitted between the computer A and the head-mounted display device 200 through a wireless connection 41 .
  • the image rendering can be completed by the computer A, or by the head-mounted display device 200 .
  • the image rendering portion is performed by computer A.
  • the head-mounted display device 200 can transmit the sensor data detected by the sensor to the computer A through the wireless connection 41, and the computer A obtains a frame of image to be rendered after processing such as pose calculation. Then computer A can divide a whole frame of image into slices, render each slice of image in parallel, encode each slice of image after rendering, and then transmit the slices to the head-mounted display device 200 through the wireless connection 41 . After the head-mounted display device 200 receives each piece of image data from the computer A through the wireless connection 41 , each piece of image data is decoded and displayed in parallel, and finally displayed.
• the processes from generating the sensor data and performing pose calculation to obtaining a frame of image to be rendered are collectively referred to as image acquisition.
• After computer A obtains the entire frame of the first image to be rendered, it can divide the entire frame of the first image into four parts, namely Slice1, Slice2, Slice3, and Slice4.
  • Computer A renders Slice1 first, and then encodes Slice1 after rendering. After the rendering of Slice1 is completed, while encoding Slice1, computer A renders Slice2, and encodes Slice2 after rendering. Computer A encodes Slice1 and then transmits it to the head-mounted display device 200 through the wireless connection 41 . After the computer A completes the encoding of the Slice1, the computer A encodes the Slice2 while transmitting the Slice1 to the head-mounted display device 200 . After the rendering of Slice2 is completed, while encoding Slice2, computer A renders Slice3, and encodes Slice3 after rendering. By analogy, after computer A finishes encoding Slice2, computer A encodes Slice3 while transmitting Slice2 to the head-mounted display device 200 .
• After the rendering of Slice3 is completed, computer A renders Slice4 while encoding Slice3, and encodes Slice4 after its rendering is completed. After computer A encodes Slice3 and transmits Slice3 to the head-mounted display device 200, computer A encodes Slice4 and then transmits it to the head-mounted display device 200.
  • the head-mounted display device 200 decodes Slice1. After the decoding of Slice1 is completed, the head-mounted display device 200 sends and displays Slice1. After decoding Slice1, while sending and displaying Slice1, the head-mounted display device 200 decodes the received Slice2.
• After Slice2 is decoded, Slice2 is sent for display; while Slice2 is being sent for display, the head-mounted display device 200 decodes the received Slice3.
  • Slice3 is decoded, Slice3 is sent for display, and at the same time as Slice3 is sent for display, the head-mounted display device 200 decodes the received Slice4.
  • Slice4 is decoded, Slice4 is sent for display.
  • the head-mounted display device 200 can display the complete first image on the display screen.
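• The overlap of rendering, encoding, transmission, decoding, and sending for display described for this scenario can be sketched as a chain of worker threads connected by queues (Python). The thread-and-queue structure, the placeholder stage functions, and the slice names are illustrative assumptions, not the actual implementation running on computer A and the head-mounted display device 200.

```python
import threading, queue

def stage(worker, inbox, outbox):
    # Generic pipeline stage: pull a slice, process it, pass it on.
    def run():
        while True:
            item = inbox.get()
            if item is None:              # sentinel: propagate shutdown downstream
                if outbox is not None:
                    outbox.put(None)
                return
            result = worker(item)
            if outbox is not None:
                outbox.put(result)
    t = threading.Thread(target=run)
    t.start()
    return t

# Placeholder stage functions; real ones would operate on pixel and bit-stream data.
render = lambda s: f"rendered({s})"
encode = lambda s: f"encoded({s})"
transmit = lambda s: s                     # stands in for the wireless connection 41
decode = lambda s: f"decoded({s})"
send_to_display = lambda s: print("displayed", s) or s

q = [queue.Queue() for _ in range(5)]
threads = [
    stage(render, q[0], q[1]),             # sending end (computer A)
    stage(encode, q[1], q[2]),             # sending end (computer A)
    stage(transmit, q[2], q[3]),           # wireless link
    stage(decode, q[3], q[4]),             # receiving end (head-mounted display device 200)
    stage(send_to_display, q[4], None),    # receiving end (head-mounted display device 200)
]

for s in ["Slice1", "Slice2", "Slice3", "Slice4"]:
    q[0].put(s)                            # slices enter the pipeline one by one
q[0].put(None)
for t in threads:
    t.join()
```

• Because each slice moves to the next stage as soon as it is ready, later slices are still being rendered or encoded while earlier slices are already being transmitted, decoded, or displayed, which is the overlap described above.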
  • the image rendering part is completed by the head-mounted display device 200 .
  • the head-mounted display device 200 can transmit the sensor data detected by the sensor to the computer A through the wireless connection 41, and the computer A obtains a frame of image to be rendered after processing such as pose calculation. Then the computer A can divide a whole frame of images into slices, encode each slice of images, and then transmit the slices to the head-mounted display device 200 through the wireless connection 41 . After the head-mounted display device 200 receives various images from the computer A through the wireless connection 41 , it decodes, renders, and displays each image in parallel, and finally displays them.
• the processes from generating the sensor data and performing pose calculation to obtaining a frame of image that has not yet been rendered are collectively referred to as image acquisition.
• After computer A acquires the whole frame of the first image, it can divide the whole frame of the first image into four parts, namely Slice1, Slice2, Slice3, and Slice4.
  • the computer A encodes Slice1 first, and then transmits the encoding to the head-mounted display device 200 through the wireless connection 41 after the encoding is completed. After the computer A completes the encoding of the Slice1, the computer A encodes the Slice2 while transmitting the Slice1 to the head-mounted display device 200 . By analogy, after computer A finishes encoding Slice2, computer A encodes Slice3 while transmitting Slice2 to the head-mounted display device 200 . After the computer A encodes Slice3 and transmits Slice3 to the head-mounted display device 200 , the computer A encodes Slice4 and then transmits it to the head-mounted display device 200 .
  • the head-mounted display device 200 decodes Slice1. After the decoding of Slice1 is completed, the head-mounted display device 200 renders and displays Slice1. After decoding Slice1, while rendering Slice1, the head-mounted display device 200 decodes the received Slice2.
• After Slice2 is decoded, Slice2 is rendered and sent for display; while Slice2 is being rendered and sent for display, the head-mounted display device 200 decodes the received Slice3.
  • Slice3 is rendered and displayed.
  • the head-mounted display device 200 decodes the received Slice4.
  • Slice4 is decoded, Slice4 is rendered and displayed.
  • the head-mounted display device 200 can display the complete first image on the display screen.
  • the image codec may adopt technologies such as H.265, aiming at compressing image size and speeding up image transmission.
• the encoding step, transmission step, and decoding step can also be collectively regarded as a transmission step.
• both the sending end (computer A) and the receiving end (the head-mounted display device 200) process each stage (encoding, transmission, decoding, rendering, and display) of the sliced images in parallel, which greatly shortens the processing time and the delay time in the image transmission and display process, so the image can be displayed faster.
  • the sending end and receiving end shown in the communication system 40 may also be any other types of electronic devices.
  • the connection type between the sending end and the receiving end is not limited to wireless connection, and may also be wired connection or other connections.
  • the content transmitted between the sending end and the receiving end is not limited to pictures, but can also be files, videos, etc.
  • the way of dividing an image is not limited to this example, either. Other scenarios based on the same scheme are within the protection scope of this application.
  • a communication system composed of a first device and a second device is taken as an example for description.
  • the first device and the second device cooperate to display images, wherein the second device is an image sending end, the first device is an image receiving end, and the first device includes a display device for displaying images.
  • image rendering is completed by the first device.
  • the embodiment of the present application does not impose any limitation on the type of each device, which may be a mobile phone, a PC, a smart screen, a head-mounted display device, and the like. It can be understood that the method provided in this embodiment is only an example, and does not constitute any limitation to other embodiments of the present application, and more or fewer terminal devices may be included in an actual service scenario.
• For example, the first device may be the aforementioned head-mounted display device 200, and the second device may be the aforementioned computer A, since the first device is the image receiving end that includes the display device.
  • the first device and the second device form a communication system 40
• the communication system 40 can display images using technologies such as VR, AR, and MR, so that users can perceive a virtual environment and be provided with a VR/AR/MR experience.
  • This embodiment does not limit the operating systems carried by the first device and the second device.
  • the software system of the first device or the second device includes but is not limited to or other operating systems.
  • Fig. 19 is a flow chart of the image transmission and display method provided by Embodiment 1, which specifically includes the following steps:
  • a first device establishes a first connection with a second device.
• the first connection established between the first device and the second device may include a wired connection, such as an HDMI connection, a DP connection, or a USB connection, or may include a wireless connection, such as a Bluetooth connection, a Wi-Fi connection, or a hotspot connection; it may also be an Internet connection that enables communication between the first device and the second device under the same account, no account, or different accounts.
  • the embodiment of the present application does not limit the type of the first connection.
  • the first connection may also be a communication connection combining any of the above methods, which is not limited in the embodiment of the present application.
  • the first device may establish a first connection with the second device based on Wi-Fi near-field networking communication, such as a Wi-Fi P2P connection.
  • the data transmitted in this embodiment is data corresponding to multiple frames of images.
  • this embodiment does not impose any limitation on the type of data transmitted between the first device and the second device.
  • images, video, audio, text, etc. can also be transmitted.
  • the second device obtains the third image, and divides the third image into a first image and a second image.
  • the third image acquired by the second device may be a third image generated by the second device based on the sensor data fed back by the first device, or an image generated by the second device itself.
• This embodiment does not limit the source of the third image or the process by which it is acquired.
  • the second device may divide the third image, for example, divide the entire third image area into small areas that do not cross each other.
  • the third image is divided into two parts, that is, the first image and the second image, as an example for description.
• the third image may also be divided into three or more parts; the sizes of the divided areas and the basis for the division are not limited in any way, as long as the divided parts can be combined into the complete third image.
  • the second device encodes the first image.
• encoding and decoding may be performed before and after image transmission.
  • the first device and the second device may negotiate a codec format according to the type of the first connection and the transmission content. This embodiment does not impose any limitation on the way of encoding and decoding, and even in some cases, there may be no encoding and decoding process.
  • the second device sends the first image to the first device.
• After the second device finishes encoding the first image, it sends the first image to the first device.
  • the second device encodes the second image.
• After the encoding of the first image is completed, the second device encodes the second image while sending the first image to the first device. Step S104 and step S105 occur simultaneously, that is, the second device sends the first image to the first device and encodes the second image in parallel.
  • the second device sends the second image to the first device.
• After the second device finishes encoding the second image, it sends the second image to the first device.
  • the first device decodes the first image.
• After the first device receives the first image from the second device, it decodes the first image. At the same time, the second device sends the second image to the first device. Step S106 and step S107 occur simultaneously, that is, the decoding of the first image by the first device and the sending of the second image by the second device to the first device are processed in parallel.
  • the first device renders the first image.
• After decoding the first image, the first device renders the first image.
  • the first device decodes the second image.
  • Step S108 and step S109 occur simultaneously, that is, the rendering of the first image by the first device and the decoding of the second image by the first device are processed in parallel.
  • the first device sends the first image to the display device.
• After rendering the first image, the first device sends the first image to the display device, which may also be referred to as sending for display. If the display device includes a frame buffer, this step may mean that the first device sends the first image to the frame buffer.
  • the first device renders the second image.
• After rendering the first image, the first device renders the second image while sending the first image for display. Step S110 and step S111 occur simultaneously, that is, the first device sends the first image for display and renders the second image in parallel.
  • the first device sends the second image to the display device.
• After rendering the second image, the first device sends the second image to the display device. If the display device includes a frame buffer, this step may mean that the first device sends the second image to the frame buffer.
  • the first device acquires image data of the first image and the second image, and displays a third image, where the third image includes the first image and the second image.
• If the display device includes a frame buffer, after the first image and the second image have been sent for display, the first device waits for the end of the black insertion and acquires the image pixel data of the first image and the second image from the frame buffer; the display screen then lights up and displays the complete third image, that is, the third image includes the first image and the second image.
• If the display device has no frame buffer, the first device receives and responds to the first image signal of the first image and directly displays the first image; the first device then receives and responds to the second image signal of the second image and displays the second image. At this time, both the first image and the second image are displayed, that is, the third image is completely displayed.
• This embodiment is described by taking the division of the third image into two parts, the first image and the second image, as an example; other embodiments in which more parts are divided are not repeated. It can be understood that no matter how many parts the third image is divided into, the idea of parallel processing is the same: in the same time period, different stages (encoding, transmission, decoding, rendering, sending for display, etc.) of different parts of the image are processed in parallel, thereby reducing the delay time in the image transmission and display process so that the image can be displayed faster.
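• As a toy illustration of how the steps above overlap on the first device, the asyncio sketch below runs the paired steps concurrently; the 2 ms durations and the coroutine structure are assumptions made only for illustration, and the step labels follow the flow described in this embodiment.

```python
import asyncio

async def step(name, ms):
    # Each step is modelled only as a delay; durations are hypothetical.
    await asyncio.sleep(ms / 1000)
    print(name, "done")

async def first_device():
    # S107: decode the first image (while the second image is still arriving).
    await step("S107 decode first image", 2)
    # S108 (render first image) runs in parallel with S109 (decode second image).
    await asyncio.gather(step("S108 render first image", 2),
                         step("S109 decode second image", 2))
    # S110 (send first image for display) runs in parallel with S111 (render second image).
    await asyncio.gather(step("S110 send first image for display", 2),
                         step("S111 render second image", 2))
    # S112: send the second image for display, then S113: display the third image.
    await step("S112 send second image for display", 2)
    print("S113 display third image (first image + second image)")

asyncio.run(first_device())
```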
  • a communication system composed of a first device and a second device is taken as an example for description.
  • the first device and the second device cooperate to display images, wherein the second device is an image sending end, the first device is an image receiving end, and the first device includes a display device for displaying images.
  • image rendering is completed by the second device.
  • the embodiment of the present application does not impose any limitation on the type of each device, which may be a mobile phone, a PC, a smart screen, a head-mounted display device, and the like. It can be understood that the method provided in this embodiment is only an example, and does not constitute any limitation to other embodiments of the present application, and more or fewer terminal devices may be included in an actual service scenario.
• For example, the first device may be the aforementioned head-mounted display device 200, and the second device may be the aforementioned computer A, since the first device is the image receiving end that includes the display device.
  • the first device and the second device form a communication system 40
• the communication system 40 can display images using technologies such as VR, AR, and MR, so that users can perceive a virtual environment and be provided with a VR/AR/MR experience.
  • This embodiment does not limit the operating systems carried by the first device and the second device.
  • the software system of the first device or the second device includes but is not limited to or other operating systems.
  • Fig. 20 is a flow chart of the image transmission and display method provided by Embodiment 2, which specifically includes the following steps:
  • the first device establishes a first connection with the second device.
• the first connection established between the first device and the second device may include a wired connection, such as an HDMI connection, a DP connection, or a USB connection, or may include a wireless connection, such as a Bluetooth connection, a Wi-Fi connection, or a hotspot connection; it may also be an Internet connection that enables communication between the first device and the second device under the same account, no account, or different accounts.
  • the embodiment of the present application does not limit the type of the first connection.
  • the first connection may also be a communication connection combining any of the above methods, which is not limited in the embodiment of the present application.
  • the first device may establish a first connection with the second device based on Wi-Fi near-field networking communication, such as a Wi-Fi P2P connection.
  • the data transmitted in this embodiment is data corresponding to multiple frames of images.
  • this embodiment does not impose any limitation on the type of data transmitted between the first device and the second device.
  • images, video, audio, text, etc. can also be transmitted.
  • the second device obtains the third image, and divides the third image into a first image and a second image.
  • the third image acquired by the second device may be a third image generated by the second device based on the sensor data fed back by the first device, or an image generated by the second device itself.
• This embodiment does not limit the source of the third image or the process by which it is acquired.
  • the second device may divide the third image, for example, divide the entire third image area into small areas that do not cross each other.
  • the third image is divided into two parts, that is, the first image and the second image, as an example for description.
• the third image may also be divided into three or more parts; the sizes of the divided areas and the basis for the division are not limited in any way, as long as the divided parts can be combined into the complete third image.
  • the second device renders the first image.
  • the second device encodes the first image.
• encoding and decoding may be performed before and after image transmission.
  • the first device and the second device may negotiate a codec format according to the type of the first connection and the transmission content. This embodiment does not impose any limitation on the way of encoding and decoding, and even in some cases, there may be no encoding and decoding process.
  • the second device renders the second image.
  • Step S204 and step S205 occur simultaneously, that is, the encoding of the first image by the second device and the rendering of the second image by the second device are processed in parallel.
  • the second device sends the first image to the first device.
• After the second device finishes encoding the first image, it sends the first image to the first device.
  • the second device encodes the second image.
• After the encoding of the first image is completed, the second device encodes the second image while sending the first image to the first device. Step S206 and step S207 occur simultaneously, that is, the second device sends the first image to the first device and encodes the second image in parallel.
  • the second device sends the second image to the first device.
• After the second device finishes encoding the second image, it sends the second image to the first device.
  • the first device decodes the first image.
• After the first device receives the first image from the second device, it decodes the first image. At the same time, the second device sends the second image to the first device. Step S208 and step S209 occur simultaneously, that is, the decoding of the first image by the first device and the sending of the second image by the second device to the first device are processed in parallel.
  • the first device sends the first image to the display device.
• After decoding the first image, the first device sends the first image to the display device, which may also be referred to as sending for display. If the display device includes a frame buffer, this step may mean that the first device sends the first image to the frame buffer.
  • the first device decodes the second image.
• After the first device finishes decoding the first image, it starts to decode the second image while sending the first image for display. Step S210 and step S211 occur simultaneously, that is, the first device sends the first image for display and decodes the second image in parallel.
  • the first device sends the second image to the display device.
• After decoding the second image, the first device sends the second image to the display device. If the display device includes a frame buffer, this step may mean that the first device sends the second image to the frame buffer.
  • the first device acquires image data of the first image and the second image, and displays a third image, where the third image includes the first image and the second image.
• If the display device includes a frame buffer, after the first image and the second image have been sent for display, the first device waits for the end of the black insertion and acquires the image pixel data of the first image and the second image from the frame buffer; the display screen then lights up and displays the complete third image, that is, the third image includes the first image and the second image.
• If the display device has no frame buffer, the first device receives and responds to the first image signal of the first image and directly displays the first image; the first device then receives and responds to the second image signal of the second image and displays the second image. At this time, both the first image and the second image are displayed, that is, the third image is completely displayed.
• This embodiment is described by taking the division of the third image into two parts, the first image and the second image, as an example; other embodiments in which more parts are divided are not repeated. It can be understood that no matter how many parts the third image is divided into, the idea of parallel processing is the same: in the same time period, different stages (encoding, transmission, decoding, rendering, sending for display, etc.) of different parts of the image are processed in parallel, thereby reducing the delay time in the image transmission and display process so that the image can be displayed faster.
  • the first device includes display means for displaying images. It can be understood that the method provided in this embodiment is only an example, and does not constitute any limitation to other embodiments of the present application.
  • the first device is the aforementioned head-mounted display device 200 all-in-one, which can display images using technologies such as VR, AR, and MR, so that users can feel the virtual environment and provide users with VR/AR/MR experience.
  • This embodiment does not limit the operating system carried by the first device, including but not limited to or other operating systems.
  • Fig. 21 is a flowchart of the image transmission and display method provided by the third embodiment, which specifically includes the following steps:
  • the first device acquires a third image, and divides the third image into a first image and a second image.
• the third image acquired by the first device may be a third image generated by the first device based on the sensor data fed back by each sensor, or a third image received from a cloud server.
• This embodiment does not limit the source of the third image or the process by which it is acquired.
• the GPU of the first device may divide the third image, that is, divide the entire third image area into small areas that do not overlap each other.
• This embodiment is described by taking the division of the third image into two parts, that is, the first image and the second image, as an example.
• the third image may also be divided into three or more parts; the sizes of the divided areas and the basis for the division are not limited in any way, as long as the divided parts can be combined into the complete third image.
  • the first device renders a first image.
  • the first device sends the first image to the display device.
• After rendering the first image, the first device sends the first image to the display device, which may also be referred to as sending for display. If the display device includes a frame buffer, this step may mean that the first device sends the first image to the frame buffer.
  • the first device renders the second image.
• After rendering the first image, the first device renders the second image while sending the first image for display. Step S303 and step S304 occur simultaneously, that is, the first device sends the first image for display and renders the second image in parallel.
  • the first device sends the second image to the display device.
• After rendering the second image, the first device sends the second image for display. If the display device includes a frame buffer, this step may mean that the first device sends the second image to the frame buffer.
  • the first device acquires image data of the first image and the second image.
  • the first device displays a third image, where the third image includes the first image and the second image.
• If the display device includes a frame buffer, after the first image and the second image have been sent for display, the first device waits for the end of the black insertion and acquires the image pixel data of the first image and the second image from the frame buffer; the display screen then lights up and displays the complete third image, that is, the third image includes the first image and the second image.
• If the display device has no frame buffer, the first device receives and responds to the first image signal of the first image and directly displays the first image; the first device then receives and responds to the second image signal of the second image and displays the second image. At this time, both the first image and the second image are displayed, that is, the third image is completely displayed.
• This embodiment is described by taking the division of the third image into two parts, the first image and the second image, as an example; other embodiments in which more parts are divided are not repeated. It can be understood that no matter how many parts the third image is divided into, the idea of parallel processing is the same: in the same time period, different stages (rendering, sending for display, etc.) of different parts of the image are processed in parallel, thereby reducing the delay time in the image transmission and display process so that the image can be displayed faster.
  • Embodiment 4 provides an image transmission and display method, the method is used for displaying by a first device, and the first device includes a display device.
  • the method may include: between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits the first image signal to the display device. Between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits a second image signal to the display device, and the first image signal is not synchronized with the second image signal. Its schematic process is shown in Figure 22.
• The transmission of the first image signal from the first device to the display device also occurs between the first display signal and the second display signal, and the transmission of the second image signal from the first device to the display device also occurs between the first display signal and the second display signal.
  • the time interval between the first vertical synchronization signal and the second vertical synchronization signal is T1
  • the time interval between the first display signal and the second display signal is T2
  • T1 and T2 are equal.
  • the display device displays a black-inserted image frame, and the period of the black-inserted image frame is T3, and T3 is shorter than T1.
• In response to the first vertical synchronization signal or the second vertical synchronization signal, the display device starts to display the black-inserted image frame. In response to the first display signal or the second display signal, the display device finishes displaying the black-inserted image frame and displays a third image, where the third image includes the first image and the second image.
  • the first image signal is an image signal of a rendered first image
  • the second image signal is an image signal of a rendered second image. That is, before sending and displaying the first image and the second image in slices, the first image and the second image will be respectively rendered in slices. Its schematic process is shown in Figure 23.
• When the first device transmits the first image signal to the display device, the first device renders the second image. That is, the step of sending the first image for display is performed in parallel with the step of rendering the second image.
  • the first time is the time taken by the first device from the start of rendering the first image to the end of transmitting the second image to the display device
• the second time is the time taken by the first device from the start of rendering the third image as a whole to the end of transmitting the third image to the display device, where the third image includes the first image and the second image; the first time is shorter than the second time. Its schematic process is shown in Figure 23.
• Before the first device transmits the first image signal to the display device, the first device renders the first image.
• the first device further includes an image rendering module, which is used to render the first image and the second image; the display device sends a feedback signal to the image rendering module, and the feedback signal is used to indicate the vertical synchronization information of the display device.
• Before the first device renders the first image, the first device receives the first image transmitted by the second device; before the first device renders the second image, the first device receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image in slices, it receives the first image and the second image transmitted in slices from the second device to the first device. Its schematic process is shown in Figure 24.
• When the first device renders the first image, the first device receives the second image transmitted by the second device. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
• the third time is the time taken by the first device from the start of receiving the first image transmitted by the second device to the end of transmitting the second image to the display device; the fourth time is the time taken by the first device from the start of receiving the third image transmitted by the second device to the end of transmitting the third image to the display device; the third time is less than the fourth time. Its schematic process is shown in Figure 24.
• Before the first device transmits the first image signal to the display device, the first device receives the first image transmitted by the second device; before the first device transmits the second image signal to the display device, the first device receives the second image transmitted by the second device. That is, before the first device sends the first image and the second image for display in slices, it receives the first image and the second image transmitted in slices from the second device to the first device. Its schematic process is shown in Figure 25.
• When the first device transmits the first image signal to the display device, the first device receives the second image transmitted by the second device. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image.
• the fifth time is the time taken by the first device from the start of receiving the first image transmitted by the second device to the end of transmitting the second image to the display device; the sixth time is the time taken by the first device from the start of receiving the third image transmitted by the second device to the end of transmitting the third image to the display device; the fifth time is less than the sixth time. Its schematic process is shown in Figure 25.
  • both the first image and the second image are rendered by the second device. That is, before the first device receives and transmits the first image and the second image in slices from the second device, the second device performs slice rendering on the first image and the second image respectively. Its schematic process is shown in Figure 26.
• When the second image is rendered by the second device, the first device receives the first image transmitted by the second device. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
• the seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device.
• the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device; the seventh time is shorter than the eighth time. Its schematic process is shown in Figure 26.
• In response to the second display signal, the first device displays a third image, where the third image includes the first image and the second image.
• In response to the first image signal, the first device displays the first image; in response to the second image signal, the first device displays the second image.
  • the display device further includes a frame buffer for storing pixel data of the first image and the second image.
  • the first device reads the pixel data of the first image and the second image in the frame buffer, and the first device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
  • Embodiment 5 provides an image transmission and display method, the method is used for displaying by a first device, and the first device includes a display device.
  • the method may include: the first device transmits the first image to the display device, the first device transmits the second image to the display device, and the display device displays a third image, wherein the third image includes the first image and the second image. Its schematic process is shown in Figure 22.
• a whole image can be divided into multiple parts and the multiple parts can be sent for display in parallel, so that the display path is optimized, thereby reducing the waiting time in the process of image transmission and display, further reducing the delay of image transmission and display, and allowing images to be displayed faster.
• Before the first device transmits the first image to the display device, the first device renders the first image; before the first device transmits the second image to the display device, the first device renders the second image. That is, before the first image and the second image are sent for display in slices, the first image and the second image are respectively rendered in slices. Its schematic process is shown in Figure 23.
• When the first device transmits the first image signal to the display device, the first device renders the second image. That is, the step of sending the first image for display is performed in parallel with the step of rendering the second image.
  • the first time is the time taken by the first device from the start of rendering the first image to the end of transmitting the second image to the display device
• the second time is the time taken by the first device from the start of rendering the third image as a whole to the end of transmitting the third image to the display device, where the third image includes the first image and the second image; the first time is shorter than the second time. Its schematic process is shown in Figure 23.
• Before the first device renders the first image, the first device receives the first image transmitted by the second device; before the first device renders the second image, the first device receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image in slices, it receives the first image and the second image transmitted in slices from the second device to the first device. Its schematic process is shown in Figure 24.
• When the first device renders the first image, the first device receives the second image transmitted by the second device. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
• the third time is the time taken by the first device from the start of receiving the first image transmitted by the second device to the end of transmitting the second image to the display device; the fourth time is the time taken by the first device from the start of receiving the third image transmitted by the second device to the end of transmitting the third image to the display device; the third time is less than the fourth time. Its schematic process is shown in Figure 24.
• Before the first device transmits the first image to the display device, the first device receives the first image transmitted by the second device; before the first device transmits the second image to the display device, the first device receives the second image transmitted by the second device. That is, before the first device sends the first image and the second image for display in slices, it receives the first image and the second image transmitted in slices from the second device to the first device. Its schematic process is shown in Figure 25.
• When the first device transmits the first image to the display device, the first device receives the second image transmitted by the second device. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image.
• the fifth time is the time taken by the first device from the start of receiving the first image transmitted by the second device to the end of transmitting the second image to the display device; the sixth time is the time taken by the first device from the start of receiving the third image transmitted by the second device to the end of transmitting the third image to the display device; the fifth time is less than the sixth time. Its schematic process is shown in Figure 25.
  • the first image and the second image are rendered by the second device. That is, before the first device receives and transmits the first image and the second image in slices from the second device, the second device performs slice rendering on the first image and the second image respectively. Its schematic process is shown in Figure 26.
• When the second image is rendered by the second device, the first device receives the first image transmitted by the second device. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
• the seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device.
• the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device; the seventh time is shorter than the eighth time. Its schematic process is shown in Figure 26.
  • the display device further includes a frame buffer, and the frame buffer stores pixel data of the first image and the second image.
  • the first device reads the pixel data of the first image and the second image in the frame buffer, and then the display device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the method further includes: in response to the first image signal, the first device displays the first image. In response to the second image signal, the first device displays the second image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
  • Embodiment 6 provides an image transmission and display method, which may include: establishing a connection between a first device and a second device.
  • the second device transmits the first image to the first device via the connection, and the second device transmits the second image to the first device via the connection.
  • the first device includes a display device, the first device transmits the first image to the display device, and the first device transmits the second image to the display device.
  • the display device displays the third image, wherein the third image includes the first image and the second image. Its schematic process is shown in Figure 25.
  • the second device divides a whole image into multiple parts and transmits them to the first device, and the first device sends the multiple parts of the image for display in parallel with this transmission, so that the display path is optimized, the waiting time in the process of image transmission and display is reduced, the delay of image transmission and display is further reduced, and the image can be displayed faster.
  • when the first device transmits the first image to the display device, the second device transmits the second image to the first device through the connection. That is, the step of sending and displaying the first image is performed in parallel with the step of transmitting the second image.
  • the fifth time is the time taken from when the second device starts transmitting the first image to the first device to when the first device finishes transmitting the second image to the display device.
  • the sixth time is the time taken from when the second device starts transmitting the third image to the first device to when the first device finishes transmitting the third image to the display device; the fifth time is less than the sixth time. Its schematic process is shown in Figure 25.
  • before the second device transmits the first image to the first device through the connection, the second device renders the first image; before the second device transmits the second image to the first device through the connection, the second device renders the second image. That is, before the first device receives the first image and the second image from the second device in slices and transmits them in slices, the second device performs sliced rendering of the first image and the second image respectively. Its schematic process is shown in Figure 26.
  • when the first device receives the first image transmitted by the second device, the second device renders the second image. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
  • the seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device.
  • the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device; the seventh time is shorter than the eighth time. Its schematic process is shown in Figure 26.
  • the method further includes: after the second device transmits the first image to the first device through the connection and before the first device transmits the first image to the display device, the first device renders the first image. After the second device transmits the second image to the first device through the connection and before the first device transmits the second image to the display device, the first device renders the second image. Its schematic process is shown in Figure 24.
  • when the second device transmits the second image to the first device through the connection, the first device renders the first image. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
  • the third time is the time taken from when the second device starts transmitting the first image to the first device to when the first device finishes transmitting the second image to the display device.
  • the fourth time is the time taken from when the second device starts transmitting the third image to the first device to when the first device finishes transmitting the third image to the display device; the third time is less than the fourth time. Its schematic process is shown in Figure 24.
  • the display device further includes a frame buffer, and the frame buffer stores pixel data of the first image and the second image.
  • the first device reads the pixel data of the first image and the second image from the frame buffer, and the first device displays the third image. That is, after the pixel data of the first image and the second image has been obtained, the display device refreshes and displays the third image as a whole frame.
  • the method further includes: in response to the first image signal, the first device displays the first image. In response to the second image signal, the first device displays the second image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
  • the embodiment of the present application also provides an electronic device, which may include: a communication device, a display device, a memory, a processor coupled to the memory, multiple application programs, and one or more programs.
  • Computer-executable instructions are stored in the memory, and the display device is used to display images.
  • when the processor executes the instructions, the electronic device can implement any function of the first device in Embodiment 4 or Embodiment 5.
  • the embodiment of the present application also provides a computer storage medium, where a computer program is stored in the storage medium, and the computer program includes executable instructions.
  • when the executable instructions are executed by a processor, the processor performs the operations corresponding to the method provided in Embodiment 4 or Embodiment 5.
  • An embodiment of the present application further provides a computer program product which, when run on an electronic device, causes the electronic device to execute any possible implementation manner in Embodiment 4 or Embodiment 5.
  • the embodiment of the present application also provides a chip system that can be applied to an electronic device; the chip system includes one or more processors, and the processors are configured to invoke computer instructions so that the electronic device implements any possible implementation of Embodiment 4 or Embodiment 5.
  • the embodiment of the present application also provides a communication system; the communication system includes a first device and a second device, and the first device can implement some of the functions of the first device in Embodiment 4 or Embodiment 5.
  • the delay in the process of image transmission and display can be reduced, and the image can be displayed faster. Furthermore, in AR/VR or MR scenarios, the MTP delay can be reduced, adverse motion-sickness reactions such as dizziness and vomiting can be effectively alleviated, and the user experience can be improved. It can be understood that the method provided by this application can be applied to more scenarios, such as a PC screen displaying images or a vehicle-mounted device displaying images: through the parallel processing of sliced transmission, rendering, and display delivery, the end-to-end display process is accelerated, delay is reduced, and the user's viewing experience is improved.
  • the term "when" may be interpreted to mean "if", "after", "in response to determining", or "in response to detecting".
  • the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted to mean "upon determining", "in response to determining", "upon detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)".
  • all or part of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof.
  • when implemented using software, the implementation may take the form, in whole or in part, of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • the available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state drives), etc.
  • all or part of the processes may be implemented by a computer program instructing related hardware.
  • the program may be stored in a computer-readable storage medium.
  • when the program is executed, the processes of the foregoing method embodiments may be performed.
  • the aforementioned storage medium includes various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.
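
To illustrate the parallel receive-and-forward behaviour described above, the following sketch (in Python) shows a first device that forwards one image slice to its display device while it is still receiving the next slice from the second device. The slice count, stage durations, and function names are illustrative assumptions and are not taken from this application; the sketch only models the overlap of reception and display delivery.

    # Hypothetical sketch: reception from the second device overlaps with
    # forwarding slices to the display device. All timings are assumptions.
    import queue
    import threading
    import time

    SLICE_COUNT = 2          # e.g. the first image and the second image of one frame
    RECEIVE_TIME = 0.004     # assumed per-slice transfer time over the connection (s)
    DISPLAY_TX_TIME = 0.004  # assumed per-slice time to push a slice to the display (s)

    def receive_from_second_device(slices: "queue.Queue[int]") -> None:
        """Simulates the first device receiving slices over the connection."""
        for slice_id in range(SLICE_COUNT):
            time.sleep(RECEIVE_TIME)      # a slice arrives from the second device
            slices.put(slice_id)          # hand it to the display-delivery stage
        slices.put(None)                  # sentinel: no more slices in this frame

    def send_to_display_device(slices: "queue.Queue[int]") -> None:
        """Simulates the first device forwarding each received slice to its display."""
        while True:
            slice_id = slices.get()
            if slice_id is None:
                break
            time.sleep(DISPLAY_TX_TIME)   # the slice is written toward the display
            print(f"slice {slice_id} delivered to the display device")

    if __name__ == "__main__":
        q: "queue.Queue[int]" = queue.Queue()
        start = time.perf_counter()
        rx = threading.Thread(target=receive_from_second_device, args=(q,))
        tx = threading.Thread(target=send_to_display_device, args=(q,))
        rx.start()
        tx.start()
        rx.join()
        tx.join()
        print(f"pipelined frame latency ≈ {time.perf_counter() - start:.3f} s")

With these illustrative numbers the frame completes in roughly 12 ms, whereas a strictly sequential receive-then-forward path would take about 16 ms.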
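
The "third/fourth", "fifth/sixth", and "seventh/eighth" time comparisons above all express the same point: the sliced, pipelined path ends earlier than rendering, transmitting, and displaying the whole third image one stage after another. The arithmetic below is a minimal worked example of that claim under assumed, equal-sized slices and illustrative stage durations; none of the numbers come from this application.

    # Hypothetical latency comparison: whole-frame (sequential) vs. sliced (pipelined).
    RENDER = 8.0     # assumed ms to render a whole frame on the second device
    TRANSMIT = 6.0   # assumed ms to transmit a whole frame to the first device
    DISPLAY = 4.0    # assumed ms to deliver a whole frame to the display device

    def whole_frame_latency() -> float:
        """Sequential path: render, then transmit, then deliver the third image."""
        return RENDER + TRANSMIT + DISPLAY

    def sliced_latency(n_slices: int) -> float:
        """Pipelined path with n equal slices and buffering between stages.

        The first slice passes through every stage once; each further slice adds
        only the duration of the slowest (bottleneck) stage. This is exact here
        because the bottleneck is the first stage (rendering).
        """
        per_slice = [RENDER / n_slices, TRANSMIT / n_slices, DISPLAY / n_slices]
        return sum(per_slice) + (n_slices - 1) * max(per_slice)

    if __name__ == "__main__":
        print(f"whole frame: {whole_frame_latency():.1f} ms")  # 18.0 ms
        print(f"2 slices   : {sliced_latency(2):.1f} ms")      # 13.0 ms
        print(f"4 slices   : {sliced_latency(4):.1f} ms")      # 10.5 ms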
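
For the frame-buffer bullets above, the sketch below shows one way a display-side frame buffer could accumulate the pixel data of the first image and the second image and then refresh either slice by slice or once the whole third image is present. The class name, row counts, and refresh function are assumptions made purely for illustration.

    # Hypothetical frame buffer: two slices written at their offsets, with either
    # per-slice refresh or a single whole-frame (third image) refresh.
    from dataclasses import dataclass, field

    FRAME_HEIGHT = 8   # assumed rows in the whole (third) image
    SLICE_HEIGHT = 4   # assumed rows per slice (first image / second image)

    @dataclass
    class FrameBuffer:
        rows: list = field(default_factory=lambda: [None] * FRAME_HEIGHT)

        def write_slice(self, slice_index: int, pixel_rows: list) -> None:
            """Store the pixel data of one slice at its offset in the buffer."""
            offset = slice_index * SLICE_HEIGHT
            self.rows[offset:offset + len(pixel_rows)] = pixel_rows

        def frame_complete(self) -> bool:
            """True once both slices (the whole third image) are present."""
            return all(row is not None for row in self.rows)

    def refresh(rows: list) -> None:
        print(f"display refresh with {len(rows)} rows")

    if __name__ == "__main__":
        fb = FrameBuffer()
        for slice_index in range(FRAME_HEIGHT // SLICE_HEIGHT):
            pixel_rows = [[slice_index] * 16 for _ in range(SLICE_HEIGHT)]
            fb.write_slice(slice_index, pixel_rows)
            # Option 1: refresh each slice as soon as its pixel data is available.
            refresh(fb.rows[slice_index * SLICE_HEIGHT:(slice_index + 1) * SLICE_HEIGHT])
        # Option 2: refresh the whole third image once both slices are buffered.
        if fb.frame_complete():
            refresh(fb.rows)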

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

This application relates to an image transmission and display method and a related device and system. A whole image can be segmented into a plurality of parts for parallel transmission, rendering, and display processing, so that the display path is optimized, the waiting time in the image transmission and display process is shortened, and the image transmission and display delay is further shortened. Moreover, in an AR/VR or MR scenario, the MTP delay can be reduced, adverse motion-sickness reactions of the user such as dizziness and vomiting are effectively alleviated, and the user experience is improved.
PCT/CN2022/091722 2021-05-31 2022-05-09 Procédé de transmission et d'affichage d'image et dispositif et système associés WO2022252924A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110602850.9A CN115480719A (zh) 2021-05-31 2021-05-31 图像传输与显示方法、相关设备及系统
CN202110602850.9 2021-05-31

Publications (1)

Publication Number Publication Date
WO2022252924A1 true WO2022252924A1 (fr) 2022-12-08

Family

ID=84323858

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091722 WO2022252924A1 (fr) 2021-05-31 2022-05-09 Procédé de transmission et d'affichage d'image et dispositif et système associés

Country Status (2)

Country Link
CN (1) CN115480719A (fr)
WO (1) WO2022252924A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106572353A (zh) * 2016-10-21 2017-04-19 上海拆名晃信息科技有限公司 用于虚拟现实的无线传输方法、装置、终端和头显设备
US20180164586A1 (en) * 2016-12-12 2018-06-14 Samsung Electronics Co., Ltd. Methods and devices for processing motion-based image
CN109920040A (zh) * 2019-03-01 2019-06-21 京东方科技集团股份有限公司 显示场景处理方法和装置、存储介质
CN111586391A (zh) * 2020-05-07 2020-08-25 中国联合网络通信集团有限公司 一种图像处理方法、装置及系统
CN112104855A (zh) * 2020-09-17 2020-12-18 联想(北京)有限公司 一种图像处理方法及装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116708753A (zh) * 2022-12-19 2023-09-05 荣耀终端有限公司 预览卡顿原因的确定方法、设备及存储介质
CN116708753B (zh) * 2022-12-19 2024-04-12 荣耀终端有限公司 预览卡顿原因的确定方法、设备及存储介质
WO2024146349A1 (fr) * 2023-01-03 2024-07-11 华为技术有限公司 Procédé et appareil de traitement d'image
CN117687772A (zh) * 2023-07-31 2024-03-12 荣耀终端有限公司 一种算法调度方法及电子设备
CN117726923A (zh) * 2024-02-05 2024-03-19 河北凡谷科技有限公司 一种基于特定模型的图像通信系统
CN117726923B (zh) * 2024-02-05 2024-05-14 河北凡谷科技有限公司 一种基于特定模型的图像通信系统

Also Published As

Publication number Publication date
CN115480719A (zh) 2022-12-16

Similar Documents

Publication Publication Date Title
WO2022252924A1 (fr) Procédé de transmission et d'affichage d'image et dispositif et système associés
CN110199267B (zh) 利用数据压缩进行实时图像转换的无缺失的高速缓存结构
KR101979564B1 (ko) 가상 현실 시스템에서 모션-대-포톤 레이턴시 및 메모리 대역폭을 감소시키기 위한 시스템들 및 방법들
US10692274B2 (en) Image processing apparatus and method
CN110494823B (zh) 利用多个lsr处理引擎的用于实时图像变换的无丢失高速缓存结构
WO2020093988A1 (fr) Procédé de traitement d'image et dispositif électronique
US20170150139A1 (en) Electronic device and method for displaying content according to display mode
CN113671706A (zh) 头戴式显示器的图像显示方法及设备
WO2022095744A1 (fr) Procédé de commande d'affichage vr, dispositif électronique et support de stockage lisible par ordinateur
CN111103975B (zh) 显示方法、电子设备及系统
CN112241199B (zh) 虚拟现实场景中的交互方法及装置
WO2021147465A1 (fr) Procédé de rendu d'image, dispositif électronique, et système
CN112004041A (zh) 视频录制方法、装置、终端及存储介质
WO2021052488A1 (fr) Procédé de traitement d'informations et dispositif électronique
WO2023082980A1 (fr) Procédé d'affichage et dispositif électronique
CN116708696B (zh) 视频处理方法和电子设备
EP4390643A1 (fr) Procédé de prévisualisation, dispositif électronique et système
WO2023001113A1 (fr) Procédé d'affichage et dispositif électronique
WO2022233256A1 (fr) Procédé d'affichage et dispositif électronique
CN118642593A (zh) 眼动追踪的帧率调整方法及相关装置
KR102679047B1 (ko) 전자 장치 및 전자 장치 제어 방법
CN110494840A (zh) 电子设备和用于电子设备的屏幕图像显示方法
WO2023202445A1 (fr) Système de démonstration, procédé, interface graphique et appareil associé
KR102405385B1 (ko) 3d 컨텐츠를 위한 여러 오브젝트를 생성하는 방법 및 시스템
WO2023179442A1 (fr) Procédé et appareil d'affichage 3d

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22814971

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22814971

Country of ref document: EP

Kind code of ref document: A1