WO2022252924A1 - Image transmission and display method and related device and system


Info

Publication number
WO2022252924A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
display
signal
display device
transmits
Prior art date
Application number
PCT/CN2022/091722
Other languages
French (fr)
Chinese (zh)
Inventor
单双
沈钢
毛春静
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022252924A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1423: Digital output to display device controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation
    • G06T 15/205: Image-based rendering
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation

Definitions

  • the present application relates to the field of terminal technology, and in particular to an image transmission and display method, related equipment and system.
  • AR devices can superimpose and display virtual images for users while viewing real-world scenes, and users can also interact with virtual images to achieve the effect of augmented reality.
  • a VR device can simulate a three-dimensional (3D) virtual world scene, and can also provide a visual, auditory, tactile or other sensory simulation experience to make users feel as if they are in the scene.
  • the user can also interact with the simulated virtual world scene.
  • MR combines AR and VR, which can provide users with a vision after merging the real and virtual worlds.
  • A head-mounted display device, also referred to as a head-mounted display, is a display device worn on the user's head, which can provide a new visual environment for the user.
  • Head-mounted display devices can present AR, VR, MR and other display effects to users by emitting optical signals.
  • An important factor affecting the user experience of AR/VR or MR head-mounted display devices is the motion-to-photon (MTP) latency.
  • MTP latency refers to the total delay from the moment the user's head moves to the moment the corresponding new image is displayed on the device screen.
  • Motion sickness is a physiological reaction of the human body. When a user experiences AR/VR or MR, the brain interprets the changing picture as some kind of bodily movement, but the vestibular organs of the inner ear, which control body balance and coordination, do not feed back any signal of body movement to the brain. This mismatch between the visual information and the movement sensed by the brain causes dizziness and vomiting, which is called motion sickness.
  • It is currently recognized in the industry that an MTP latency of less than 20 milliseconds (ms) can greatly reduce the occurrence of motion sickness.
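To make the latency budget concrete, the following minimal sketch sums assumed per-stage delays and compares them against the 20 ms threshold cited above. The stage names and durations are illustrative assumptions, not figures from this application.

```python
# Illustrative MTP (motion-to-photon) latency budget; all durations are
# assumptions for illustration only.
stages_ms = {
    "IMU sampling and pose estimation": 3.0,
    "rendering": 8.0,
    "transmission to the display device": 4.0,
    "display refresh": 4.0,
}

mtp_ms = sum(stages_ms.values())
budget_ms = 20.0  # the industry-recognized threshold cited above

print(f"estimated MTP latency: {mtp_ms:.1f} ms")
print("within budget" if mtp_ms < budget_ms else "over budget")
```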
  • This application provides an image transmission and display method and related device, which optimize the display path by dividing a whole image into multiple parts that are transmitted, rendered, and displayed in parallel, thereby reducing the waiting time during image transmission and display and further reducing the overall delay of image transmission and display.
  • In a first aspect, the present application provides an image transmission and display method. The method is used by a first device for display, and the first device includes a display device.
  • the method may include: between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits the first image signal to the display device. Between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits a second image signal to the display device, and the first image signal is not synchronized with the second image signal.
  • The moment at which the first device transmits the first image signal to the display device is also located between the first display signal and the second display signal.
  • The moment at which the first device transmits the second image signal to the display device is also located between the first display signal and the second display signal.
  • The time interval between the first vertical synchronization signal and the second vertical synchronization signal is T1, the time interval between the first display signal and the second display signal is T2, and T1 is equal to T2.
  • The display device displays a black-inserted image frame, the period of the black-inserted image frame is T3, and T3 is shorter than T1.
  • In response to the first vertical synchronization signal or the second vertical synchronization signal, the display device starts to display the black-inserted image frame. In response to the first display signal or the second display signal, the display device finishes displaying the black-inserted image frame and displays a third image, where the third image includes the first image and the second image.
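The signal relationships above can be pictured on a timeline. The following is a minimal sketch under assumed values; the signal names follow the text, but the refresh rate, the black-frame period, and the assumption that the display signal arrives T3 after the vertical synchronization signal are illustrative, not specified here.

```python
# Hedged sketch of the refresh timing described above.
T1 = 11.1  # ms between vertical synchronization signals (~90 Hz, assumed)
T2 = T1    # ms between display signals; the text requires T1 == T2
T3 = 3.0   # ms black-inserted frame period; the text requires T3 < T1
assert T1 == T2 and T3 < T1

for cycle in range(2):
    vsync = cycle * T1           # first / second vertical synchronization signal
    display_signal = vsync + T3  # assumed: display signal ends the black frame
    print(f"{vsync:5.1f} ms  vsync: start displaying the black-inserted image frame")
    print(f"{display_signal:5.1f} ms  display signal: end black frame, "
          f"display the third image (first image + second image)")
```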
  • The first image signal is an image signal of a rendered first image, and the second image signal is an image signal of a rendered second image. That is, before the first image and the second image are sent for display in slices, they are each rendered in slices.
  • When the first device transmits the first image signal to the display device, the first device renders the second image. That is, the step of sending the first image for display is performed in parallel with the step of rendering the second image.
  • The first time is the time taken by the first device from the start of rendering the first image to the end of transmitting the second image to the display device, and the second time is the time taken by the first device from the start of rendering a third image to the end of transmitting the third image to the display device, where the third image includes the first image and the second image. The first time is shorter than the second time.
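The following minimal sketch shows why the first time comes out shorter than the second time when rendering and transmission overlap; the stage durations are illustrative assumptions.

```python
# Assumed whole-frame stage durations, for illustration only.
RENDER_MS = 8.0    # render the whole third image
TRANSMIT_MS = 6.0  # transmit the whole frame to the display device

# "Second time": render the whole third image, then transmit it (serial).
second_time = RENDER_MS + TRANSMIT_MS

# "First time": split the frame into two slices (first and second image);
# while slice 1 is being transmitted, slice 2 is being rendered.
render_slice, transmit_slice = RENDER_MS / 2, TRANSMIT_MS / 2
first_time = render_slice + max(render_slice, transmit_slice) + transmit_slice

print(f"second time (serial):   {second_time:.1f} ms")
print(f"first time (pipelined): {first_time:.1f} ms")
assert first_time < second_time
```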
  • Before the first device transmits the first image signal to the display device, the first device renders the first image.
  • The first device further includes an image rendering module. The image rendering module is configured to render the first image and the second image, and the display device sends a feedback signal to the image rendering module, where the feedback signal indicates the vertical synchronization information of the display device.
  • Before the first device renders the first image, the first device receives the first image transmitted by the second device; before the first device renders the second image, it receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image in slices, it receives the first image and the second image transmitted in slices from the second device.
  • When the first device renders the first image, the first device receives the second image transmitted by the second device. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
  • The third time is the time taken by the first device from the start of receiving the first image transmitted by the second device to the end of transmitting the second image to the display device, and the fourth time is the time taken by the first device from the start of receiving the third image transmitted by the second device to the end of transmitting the third image to the display device. The third time is less than the fourth time.
  • Before the first device transmits the first image signal to the display device, the first device receives the first image transmitted by the second device; before the first device transmits the second image signal to the display device, it receives the second image transmitted by the second device. That is, before the first device sends the first image and the second image for display in slices, it receives the first image and the second image transmitted in slices from the second device.
  • When the first device transmits the first image signal to the display device, the first device receives the second image transmitted by the second device. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image.
  • The fifth time is the time taken by the first device from the start of receiving the first image transmitted by the second device to the end of transmitting the second image to the display device, and the sixth time is the time taken by the first device from the start of receiving the third image transmitted by the second device to the end of transmitting the third image to the display device. The fifth time is less than the sixth time.
  • Both the first image and the second image are rendered by the second device. That is, before the first device receives the first image and the second image transmitted in slices from the second device, the second device renders the first image and the second image in slices.
  • While the second image is being rendered by the second device, the first device receives the first image transmitted by the second device. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
  • The seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device, and the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device. The seventh time is shorter than the eighth time.
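All of the variants above are instances of one pattern: receiving, rendering, and transmitting form a pipeline over the slices. The sketch below simulates that pipeline for an assumed two-slice frame; the per-slice stage durations are illustrative assumptions.

```python
# Three-stage pipeline (receive -> render -> transmit) over N slices.
RECEIVE, RENDER, TRANSMIT = 2.0, 4.0, 3.0  # ms per slice (assumed)

def pipelined_total(num_slices: int) -> float:
    """Finish time when stage k of slice i starts as soon as both stage k-1
    of slice i and stage k of slice i-1 are done."""
    durations = [RECEIVE, RENDER, TRANSMIT]
    done = [0.0] * len(durations)  # rolling finish time of each stage
    for _ in range(num_slices):
        prev_stage_done = 0.0
        for k, d in enumerate(durations):
            done[k] = max(prev_stage_done, done[k]) + d
            prev_stage_done = done[k]
    return done[-1]

serial_whole_frame = (RECEIVE + RENDER + TRANSMIT) * 2  # two slices, no overlap
print(f"serial whole frame: {serial_whole_frame:.1f} ms")
print(f"pipelined 2 slices: {pipelined_total(2):.1f} ms")
```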
  • In response to the second display signal, the first device displays a third image, where the third image includes the first image and the second image.
  • In response to the first image signal, the first device displays the first image; in response to the second image signal, the first device displays the second image.
  • the display device further includes a frame buffer, where the frame buffer is used to store pixel data of the first image and the second image.
  • the first device reads pixel data of the first image and the second image in the frame buffer, and the first device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
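The two display policies described above (refresh the whole third image at once, or refresh each slice as its pixel data arrives) can be sketched as follows; the buffer layout and sizes are illustrative assumptions.

```python
from typing import Dict, List

frame_buffer: Dict[str, List[int]] = {}

def display(region: str, pixels: List[int]) -> None:
    print(f"refresh {region}: {len(pixels)} px")

# Pixel data of the first and second images arrives slice by slice.
frame_buffer["first image"] = [0] * 100
frame_buffer["second image"] = [0] * 100

# Policy A: wait for both slices, then display the whole third image.
if "first image" in frame_buffer and "second image" in frame_buffer:
    display("third image",
            frame_buffer["first image"] + frame_buffer["second image"])

# Policy B: display each slice as soon as its pixel data is available.
for name, pixels in frame_buffer.items():
    display(name, pixels)
```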
  • In a second aspect, the present application provides an image transmission and display method. The method is used by a first device for display, and the first device includes a display device.
  • the method may include: the first device transmits the first image to the display device, the first device transmits the second image to the display device, and the display device displays a third image, wherein the third image includes the first image and the second image.
  • Before the first device transmits the first image to the display device, the first device renders the first image; before the first device transmits the second image to the display device, it renders the second image. That is, before the first image and the second image are sent for display in slices, they are each rendered in slices.
  • When the first device transmits the first image to the display device, the first device renders the second image. That is, the step of sending the first image for display is performed in parallel with the step of rendering the second image.
  • The first time is the time taken by the first device from the start of rendering the first image to the end of transmitting the second image to the display device, and the second time is the time taken by the first device from the start of rendering a third image to the end of transmitting the third image to the display device, where the third image includes the first image and the second image. The first time is shorter than the second time.
  • Before the first device renders the first image, the first device receives the first image transmitted by the second device; before the first device renders the second image, it receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image in slices, it receives the first image and the second image transmitted in slices from the second device.
  • When the first device renders the first image, the first device receives the second image transmitted by the second device. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
  • The third time is the time taken by the first device from the start of receiving the first image transmitted by the second device to the end of transmitting the second image to the display device, and the fourth time is the time taken by the first device from the start of receiving the third image transmitted by the second device to the end of transmitting the third image to the display device. The third time is less than the fourth time.
  • Before the first device transmits the first image to the display device, the first device receives the first image transmitted by the second device; before the first device transmits the second image to the display device, it receives the second image transmitted by the second device. That is, before the first device sends the first image and the second image for display in slices, it receives the first image and the second image transmitted in slices from the second device.
  • When the first device transmits the first image to the display device, the first device receives the second image transmitted by the second device. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image.
  • The fifth time is the time taken by the first device from the start of receiving the first image transmitted by the second device to the end of transmitting the second image to the display device, and the sixth time is the time taken by the first device from the start of receiving the third image transmitted by the second device to the end of transmitting the third image to the display device. The fifth time is less than the sixth time.
  • The first image and the second image are rendered by the second device. That is, before the first device receives the first image and the second image transmitted in slices from the second device, the second device renders the first image and the second image in slices.
  • While the second image is being rendered by the second device, the first device receives the first image transmitted by the second device. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
  • The seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device, and the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device. The seventh time is shorter than the eighth time.
  • the display device further includes a frame buffer, where pixel data of the first image and the second image are stored in the frame buffer.
  • the first device reads pixel data of the first image and the second image in the frame buffer, and then the display device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the method further includes: in response to the first image signal, the first device displays the first image. In response to the second image signal, the first device displays the second image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
  • In a third aspect, the present application provides an image transmission and display method, and the method may include: establishing a connection between a first device and a second device.
  • the second device transmits the first image to the first device via the connection, and the second device transmits the second image to the first device via the connection.
  • the first device includes a display device, the first device transmits the first image to the display device, and the first device transmits the second image to the display device.
  • the display device displays the third image, wherein the third image includes the first image and the second image.
  • In this way, the second device divides a whole image into multiple parts and transmits them to the first device in a pipelined manner, and the first device then processes the multiple parts for display in parallel. The display path is thereby optimized, the waiting time during image transmission and display is reduced, the delay of image transmission and display is further reduced, and the image can be displayed faster (see the slicing sketch below).
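A minimal sketch of the slicing step, assuming two horizontal bands and an assumed 1920x1080 frame (neither the slice count nor the frame size is specified here):

```python
from typing import List

def split_into_slices(frame: List[List[int]],
                      num_slices: int) -> List[List[List[int]]]:
    """Split a frame (a list of pixel rows) into horizontal bands."""
    rows_per_slice = len(frame) // num_slices
    return [frame[i * rows_per_slice:(i + 1) * rows_per_slice]
            for i in range(num_slices)]

frame = [[0] * 1920 for _ in range(1080)]  # assumed frame size
first_image, second_image = split_into_slices(frame, num_slices=2)
print(f"first image:  {len(first_image)} rows")   # transmitted first
print(f"second image: {len(second_image)} rows")  # transmitted in parallel
                                                  # with downstream processing
```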
  • While the first device sends the first image for display, the second device transmits the second image to the first device through the connection. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image.
  • The fifth time is the time taken from when the second device starts transmitting the first image to the first device to when the first device finishes transmitting the second image to the display device, and the sixth time is the time taken from when the second device starts transmitting the third image to the first device to when the first device finishes transmitting the third image to the display device. The fifth time is less than the sixth time.
  • Before the second device transmits the first image to the first device through the connection, the second device renders the first image; before the second device transmits the second image to the first device through the connection, it renders the second image. That is, before the first device receives the first image and the second image transmitted in slices from the second device, the second device renders the first image and the second image in slices.
  • When the first device receives the first image transmitted by the second device, the second device renders the second image. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
  • The seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device, and the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device. The seventh time is shorter than the eighth time.
  • The method further includes: after the second device transmits the first image to the first device through the connection and before the first device transmits the first image to the display device, the first device renders the first image. After the second device transmits the second image to the first device through the connection and before the first device transmits the second image to the display device, the first device renders the second image.
  • When the second device transmits the second image to the first device through the connection, the first device renders the first image. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
  • The third time is the time taken from when the second device starts transmitting the first image to the first device to when the first device finishes transmitting the second image to the display device, and the fourth time is the time taken from when the second device starts transmitting the third image to the first device to when the first device finishes transmitting the third image to the display device. The third time is less than the fourth time.
  • the display device further includes a frame buffer, where pixel data of the first image and the second image are stored in the frame buffer.
  • the first device reads pixel data of the first image and the second image in the frame buffer, and the first device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the method further includes: in response to the first image signal, the first device displays the first image. In response to the second image signal, the first device displays the second image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
  • the embodiment of the present application provides an electronic device, and the electronic device may include: a communication device, a display device, a memory, a processor coupled to the memory, multiple application programs, and one or more programs.
  • Computer-executable instructions are stored in the memory, and the display device is used to display images.
  • When the processor executes the instructions, the electronic device can implement any function of the first device in the first aspect or the second aspect.
  • An embodiment of the present application provides a computer storage medium in which a computer program is stored. The computer program includes executable instructions which, when executed by a processor, cause the processor to perform the operations corresponding to the methods provided in the first aspect or the second aspect.
  • an embodiment of the present application provides a computer program product, which, when the computer program product runs on an electronic device, causes the electronic device to execute any possible implementation manner in the first aspect or the second aspect.
  • An embodiment of the present application provides a chip system, which can be applied to an electronic device. The chip includes one or more processors, and the processors are used to invoke computer instructions so that the electronic device implements any possible implementation of the first aspect or the second aspect.
  • An embodiment of the present application provides a communication system. The communication system includes a first device and a second device, and the first device can implement some of the functions of the first device in the first aspect or the second aspect.
  • Implementing the above aspects provided by the embodiments of the present application can reduce the delay in the process of image transmission and display, so that images can be displayed faster. Furthermore, in AR/VR or MR scenarios, it can reduce the MTP latency, effectively alleviate adverse reactions of motion sickness such as dizziness and vomiting, and improve the user experience. It can be understood that the method provided by this application can be applied to more scenarios, such as electronic devices projecting and displaying images, vehicle-mounted devices displaying images, and other image transmission and display scenarios. Through the parallel processing of sliced transmission, rendering, and display, the end-to-end display process is accelerated, delay is reduced, and the user's viewing experience is improved.
  • FIG. 1 is a schematic structural diagram of a head-mounted display device
  • FIG. 2 is a schematic diagram of a usage scenario of a head-mounted display device
  • FIG. 3 is a schematic diagram of a hardware structure of an electronic device provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the software structure of the electronic device provided by the embodiment of the present application.
  • FIG. 5 is a schematic diagram of a communication system provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a process module provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an image display process
  • FIG. 8 is a schematic diagram of a module time-consuming analysis provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a module time-consuming analysis provided by the embodiment of the present application.
  • FIG. 10 is a schematic diagram of an image slice transmission process provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of image division provided by the embodiment of the present application.
  • FIG. 12 is a schematic diagram of image slice sending and display provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an image slice transmission process provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a rendering process provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of an image slice transmission process provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a communication system provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of an image slice transmission process provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of an image slice transmission process provided by an embodiment of the present application.
  • FIG. 19 is a flow chart of an image transmission and display method provided by an embodiment of the present application.
  • FIG. 20 is a flow chart of an image transmission and display method provided by an embodiment of the present application.
  • FIG. 21 is a flow chart of an image transmission and display method provided by an embodiment of the present application.
  • FIG. 22 is a schematic diagram of the process of an image transmission and display method provided by an embodiment of the present application.
  • FIG. 23 is a schematic diagram of the process of an image transmission and display method provided by an embodiment of the present application.
  • FIG. 24 is a schematic diagram of the process of an image transmission and display method provided by an embodiment of the present application.
  • FIG. 25 is a schematic diagram of the process of an image transmission and display method provided by an embodiment of the present application.
  • FIG. 26 is a schematic diagram of a process of an image transmission and display method provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of these features.
  • plurality refers to two or more than two.
  • the electronic device involved in the embodiment of the present application may be a head-mounted display device, worn on the user's head, and the head-mounted display device may realize display effects such as AR, VR, and MR.
  • the appearance form of the head-mounted display device may be glasses, a helmet, an eye mask, etc., which is not limited in this embodiment of the present application.
  • the electronic device involved in the embodiment of the present application may also be other devices including a display screen, such as a mobile phone, a personal computer (personal computer, PC), a tablet computer (portable android device, PAD) , a smart TV, or a vehicle-mounted device, etc., the embodiment of the present application does not limit the type of the electronic device.
  • the virtual object displayed by the electronic device can interact with the user.
  • the user can directly interact with the virtual object displayed on the electronic device through somatosensory interaction methods such as hand/arm movement, head movement, and eyeball rotation.
  • the electronic device can be used together with the handheld device, and the user can interact with the virtual object displayed on the electronic device by manipulating the handheld device.
  • the handheld device may be, for example, a handle, controller, gyro mouse, stylus, or other handheld computing device.
  • Handheld devices can be configured with various sensors such as acceleration sensors, gyroscope sensors, magnetic sensors, etc., which can be used to detect and track their own motion.
  • Handheld devices can communicate with electronic devices through short-range wireless technologies such as wireless fidelity (Wi-Fi), Bluetooth (BT), near field communication (NFC), and ZigBee, or through wired connections such as a universal serial bus (USB) interface or a custom interface.
  • FIG. 1 shows a schematic structural diagram of a head-mounted display device 200 .
  • The head-mounted display device 200 may include some or all of a left lens 201, a right lens 202, a left display 203, a right display 204, a left camera 205, a right camera 206, an inertial measurement unit (IMU) 207, and the like.
  • both the left lens 201 and the right lens 202 may be convex lenses, Fresnel lenses or one or more transparent lenses of other types.
  • the image on the display screen can be brought closer to the position of the user's retina, so that the user's eyes can clearly see the image on the display screen that is almost attached to the front of the user's eyes.
  • the left display screen 203 and the right display screen 204 are used to display images of the real world or virtual world or a combination of virtual and real worlds.
  • the left camera 205 and the right camera 206 are used to collect real-world images.
  • The IMU 207 is a sensor used to detect and measure acceleration and rotational motion, and may include an accelerometer, an angular velocity meter (gyroscope), and the like.
  • The accelerometer is a sensor that is sensitive to axial acceleration and converts it into a usable output signal; the gyroscope is a sensor that is sensitive to the angular velocity of a moving body relative to inertial space.
  • FIG. 2 is a schematic diagram of a usage scenario of a head-mounted display device 200 according to an embodiment of the present application.
  • the user's eyes can see the image 210 presented on the display screen of the head-mounted display device 200 .
  • the image 210 presented on the display screen of the head-mounted display device 200 is a visualized virtual environment, which may be a virtual world image, a real world image, or a combination of virtual and real world images.
  • The two images have positional deviations, so there is a difference in the image information obtained by the user's left and right eyes, that is, parallax.
  • The user's brain subconsciously calculates the distance between objects and the body according to this parallax, so that the viewed image has a sense of three-dimensionality and depth of field, giving the user the feeling of being inside the virtual environment and forming a hyper-realistic, immersive experience of actually being there.
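For intuition, binocular depth perception can be approximated by the standard stereo relation Z = f * B / d; this is a general computer-vision formula added for illustration, not something specified in this application, and the values below are assumptions.

```python
focal_length_px = 1000.0  # assumed focal length, in pixels
baseline_m = 0.063        # assumed interpupillary distance (~63 mm)

def depth_from_disparity(disparity_px: float) -> float:
    """Greater parallax (disparity) means the object is perceived as closer."""
    return focal_length_px * baseline_m / disparity_px

for d_px in (10.0, 30.0, 90.0):
    print(f"disparity {d_px:5.1f} px -> depth {depth_from_disparity(d_px):.2f} m")
```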
  • the provision of a visual virtual environment by an electronic device means that the electronic device renders and displays a virtual image composed of one or more virtual objects, or an image combining real objects and virtual objects by using display technologies such as AR/VR or MR.
  • the virtual object may be generated by the electronic device itself using computer graphics technology, computer simulation technology, etc., or may be generated by other electronic devices using computer graphics technology, computer simulation technology, etc. and sent to the electronic device.
  • Other electronic devices may be servers, and may also be mobile phones, computers, etc. connected or paired with the electronic devices.
  • Virtual objects may also be referred to as virtual images or virtual elements. Virtual objects can be two-dimensional or three-dimensional. Virtual objects are fake objects that do not exist in the physical world.
  • the virtual object may be a virtual object imitating an object existing in a real physical world, thereby bringing an immersive experience to the user.
  • Virtual objects may include virtual animals, virtual characters, virtual trees, virtual buildings, virtual labels, icons, pictures or videos and so on.
  • the corresponding real object refers to an object existing in a real physical environment or a physical space where the user and the electronic device are currently located.
  • Real objects may include animals, people, trees, buildings, and the like.
  • its product form can be an all-in-one machine or a split machine.
  • An all-in-one machine refers to a device with an independent processor, which has independent computing, input and output functions, without the constraints of connections, and has a higher degree of freedom.
  • the split machine means that the display device is separated from the host computer, the display device is mainly used for displaying images, and the host computer is mainly used for calculation processing. Since the host processing system of the split machine is separated from the head-mounted display device, the host can adopt a higher-performance processor and heat dissipation system. Therefore, the advantage of the split machine is that the function allocation between the display device and the host is more reasonable, the processing performance is stronger, and the resources are more abundant. However, the split machine needs to consider cross-device and cross-platform compatibility, such as hardware platform, software system, operating system, application software, etc.
  • the hardware structure of the electronic device 100 provided in the embodiment of the present application is described as an example below.
  • the electronic device 100 provided in the embodiment of the present application may be the aforementioned head-mounted display device 200 , or may be the host 310 or other electronic devices described later.
  • The electronic device 100 may include, but is not limited to, a mobile phone, a desktop computer, a notebook computer, a tablet computer, a smart screen (smart TV), a laptop computer, a handheld computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a game console, a smart home device, an Internet of Things or Internet of Vehicles device, and the like.
  • FIG. 3 is a schematic diagram of a hardware structure of an electronic device 100 provided in an embodiment of the present application.
  • the electronic device 100 is a head-mounted display device 200 as an example for illustration.
  • the electronic device 100 is other devices such as mobile phones, part of the hardware structure may be added or reduced.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display device 194, an eye tracking module 195, and the like.
  • the sensor module 180 can be used to acquire the user's posture, and can include a pressure sensor 180A, a gyroscope sensor 180B, an acceleration sensor 180C, a distance sensor 180D, a touch sensor 180E, and the like.
  • the structure shown in this embodiment does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • The processor 110 is generally used to control the overall operation of the electronic device 100, and may include one or more processing units. For example, the processor 110 may include a central processing unit (CPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video processing unit (VPU), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated into one or more processors.
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, and/or a serial peripheral interface (SPI) interface, etc.
  • the processor 110 may render different objects based on different frame rates, for example, use a high frame rate for rendering for nearby objects, and use a low frame rate for rendering for distant objects.
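A minimal sketch of that distance-dependent rendering policy; the distance threshold and frame rates are illustrative assumptions.

```python
def render_rate_hz(distance_m: float) -> int:
    """Nearby objects are re-rendered at a higher frame rate (assumed values)."""
    return 90 if distance_m < 5.0 else 30

for d in (1.0, 10.0):
    print(f"object at {d:4.1f} m -> render at {render_rate_hz(d)} Hz")
```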
  • The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be respectively coupled to the touch sensor 180E, the charger, the flashlight, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180E through the I2C interface, so that the processor 110 and the touch sensor 180E communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of listening to audio through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of listening to audio through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing audio through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display device 194 and the camera 193 .
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
  • the processor 110 communicates with the display device 194 through the DSI interface to realize the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193 , the display device 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as mobile phones, computers, VR/AR or MR devices, etc.
  • The USB interface may be USB 3.0, which is compatible with high-speed DisplayPort (DP) signal transmission and can transmit high-speed video and audio data.
  • the interface connection relationship between the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also provide power for electronic devices through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display device 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be disposed in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the electronic device 100 may include a wireless communication function.
  • the head-mounted display device 200 may receive rendered images from other electronic devices (such as a VR host or a VR server) for display, or receive unrendered images and then the processor 110 The image is rendered and displayed.
  • the wireless communication function can be realized by an antenna (not shown), a mobile communication module 150, a wireless communication module 160, a modem processor (not shown), and a baseband processor (not shown).
  • Antennas are used to transmit and receive electromagnetic wave signals. Multiple antennas may be included in the electronic device 100, and each antenna may be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antennas can be multiplexed as diversity antennas for wireless LANs. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • The mobile communication module 150 can provide wireless communication solutions applied on the electronic device 100, including second generation (2G), third generation (3G), fourth generation (4G), and fifth generation (5G) networks.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave and radiate it through the antenna.
  • At least part of the functional modules of the mobile communication module 150 may be set in the processor 110 . In some embodiments, at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display device 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide solutions for wireless communication applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic wave and radiate it through the antenna.
  • the antenna of the electronic device 100 is coupled to the mobile communication module 150 and the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
• The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
• The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 realizes the display function through the GPU, the display device 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display device 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display device 194 is used to display images, videos and the like. Wherein, the display device 194 may be used to present one or more virtual objects, so that the electronic device 100 provides a virtual reality scene for the user. In some embodiments, the electronic device 100 may include 1 or N display devices 194 , where N is a positive integer greater than 1.
  • the manner in which the display device 194 presents the virtual object may include one or more of the following:
  • the display device 194 may include a display screen, and the display screen may include a display panel.
  • the display panel can be used to display physical objects and/or virtual objects, so as to present a three-dimensional virtual environment for users. The user can see the virtual object on the display panel and experience the virtual reality scene.
• The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the display device 194 may include optics for projecting an optical signal (eg, a light beam) directly onto the user's retina.
• The display device 194 can convert a real pixel image display into a near-eye projected virtual image display through one or several optical devices such as reflective mirrors, transmissive mirrors, or optical waveguides. Through the optical signals projected by the optical device, the user can directly see virtual objects and perceive the three-dimensional virtual environment, realizing a virtual interactive experience or an interactive experience combining the virtual and the real.
  • the optical device may be a pico projector or the like.
  • the number of display devices 194 in the electronic device may be two, corresponding to the two eyes of the user respectively.
  • the content displayed on the two display devices can be displayed independently. Images with parallax can be displayed on the two display devices to improve the stereoscopic effect of the images.
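As a rough illustration of the parallax idea (not the patent's implementation), the sketch below derives a horizontal offset for each eye's viewpoint from an assumed interpupillary distance; the class name, parameter, and the 0.063 m figure are all illustrative assumptions.

```java
// Hypothetical sketch: two display devices show images rendered from two
// viewpoints separated horizontally by the interpupillary distance (IPD),
// which produces the parallax that improves the stereoscopic effect.
public final class StereoOffsets {
    // Returns {leftEyeOffset, rightEyeOffset} along the X axis, in meters.
    public static float[][] eyeOffsets(float ipdMeters) {
        float half = ipdMeters / 2f;
        return new float[][] {
            { -half, 0f, 0f },  // left eye: shifted half the IPD to the left
            { +half, 0f, 0f }   // right eye: shifted half the IPD to the right
        };
    }

    public static void main(String[] args) {
        // 0.063 m is a commonly cited average adult IPD (assumption).
        float[][] offsets = eyeOffsets(0.063f);
        System.out.println(offsets[0][0] + " / " + offsets[1][0]);
    }
}
```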
• The number of display devices 194 in the electronic device may also be one, in which case the user's two eyes watch the same image.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display device 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
• Light is transmitted through the lens to the photosensitive element of the camera, where the light signal is converted into an electrical signal; the photosensitive element of the camera transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
• The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the camera 193 may include, but not limited to, a traditional color camera (RGB camera), a depth camera (RGB depth camera), a dynamic vision sensor (dynamic vision sensor, DVS) camera, and the like.
  • camera 193 may be a depth camera.
  • the depth camera can collect the spatial information of the real environment.
• The camera 193 may capture an image including a real object.
  • the processor 110 may fuse the image of the real object captured by the camera 193 with the image of the virtual object, and display the fused image through the display device 194 .
  • the camera 193 may collect hand images or body images of the user, and the processor 110 may be configured to analyze the images collected by the camera 193 to recognize hand motions or body motions input by the user.
  • the camera 193 can be used in conjunction with an infrared device (such as an infrared emitter) to detect the user's eye movements, such as eye gaze direction, blinking operation, gaze operation, etc., thereby realizing eye tracking (eye tracking).
• In some embodiments, the electronic device 100 may further include an eye-tracking module 195, which is used to track the movement of the human eye and then determine the point at which the eye is gazing.
• The position of the pupil can be located by image processing technology to obtain the coordinates of the pupil center, from which the person's gaze point can then be calculated.
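A minimal sketch of the pupil-center approach described above, under two stated assumptions: the input is a grayscale infrared eye image where pupil pixels are the darkest, and an affine calibration mapping has been obtained beforehand. This is an illustration, not the patent's algorithm; all names and coefficients are hypothetical.

```java
// Locate the pupil center as the centroid of dark pixels, then map it to a
// screen gaze point with a pre-calibrated affine mapping (assumed known).
public final class GazeEstimator {
    public static float[] pupilCenter(int[][] gray, int threshold) {
        long sx = 0, sy = 0, n = 0;
        for (int y = 0; y < gray.length; y++) {
            for (int x = 0; x < gray[y].length; x++) {
                if (gray[y][x] < threshold) { // pupil pixels are darkest
                    sx += x; sy += y; n++;
                }
            }
        }
        if (n == 0) return null; // pupil not found in this frame
        return new float[] { (float) sx / n, (float) sy / n };
    }

    // ax, bx, ay, by are calibration coefficients obtained beforehand.
    public static float[] gazePoint(float[] pupil, float ax, float bx,
                                    float ay, float by) {
        return new float[] { ax * pupil[0] + bx, ay * pupil[1] + by };
    }
}
```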
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may be used to store application programs of one or more applications, and the application programs include instructions.
• When an application program is executed by the processor 110, the electronic device 100 generates content for presentation to the user.
  • the application may include an application for managing the head-mounted display device 200, a game application, a meeting application, a video application, a desktop application or other applications, and the like.
• The internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
  • Random access memory has the characteristics of fast read/write speed and volatility. Volatile means that once the power is turned off, the data stored in RAM will disappear. Usually, the static power consumption of the random access memory is extremely low, and the operating power consumption is relatively large.
  • the data in RAM is memory data, which can be read at any time and disappears when the power is turned off.
  • Non-volatile memory has the characteristics of non-volatility and stable data storage.
  • Non-volatile means that the stored data will not disappear after the power is turned off, and the data can be saved for a long time without power.
  • the data in NVM includes application data, which can be stored stably in NVM for a long time.
  • Application data refers to the content written during the running of an application or service process, such as photos or videos obtained by a camera application, text edited by a user in a document application, and so on.
• Random access memory may include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM; for example, the fifth generation of DDR SDRAM is generally called DDR5 SDRAM), etc.
  • the non-volatile memory may include magnetic disk storage, flash memory, and the like.
  • the disk storage device is a memory with a disk as the storage medium, which has the characteristics of large storage capacity, high data transmission rate, and long-term storage of stored data.
  • flash memory can include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc.
• By operating principle of the storage cell, flash memory may include single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), etc.; by storage specification, it may include universal flash storage (UFS), embedded multimedia card (embedded multi media card, eMMC), etc.
  • the random access memory can be directly read and written by the processor 110, and can be used to store executable programs (such as machine instructions) of an operating system or other running programs, and can also be used to store data of users and application programs.
  • the non-volatile memory can also store executable programs and data of users and application programs, etc., and can be loaded into the random access memory in advance for the processor 110 to directly read and write.
  • the external memory interface 120 can be used to connect an external non-volatile memory, so as to expand the storage capacity of the electronic device 100 .
  • the external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and video are stored in an external non-volatile memory.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
• Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
• Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the receiver 170B can be placed close to the human ear to receive the voice.
• The microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a phone call or sending a voice message, the user can put his or her mouth close to the microphone 170C to make a sound, inputting the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the earphone interface 170D is used for connecting wired earphones.
• The earphone interface 170D can be a USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the electronic device 100 may include one or more keys 190 , and these keys 190 may control the electronic device and provide a user with access to functions on the electronic device 100 .
• The key 190 may take a mechanical form such as a button, a switch, or a dial, or may be a touch or near-touch sensing device (such as a touch sensor).
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the keys 190 may include a power key, a volume key and the like.
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the electronic device 100 .
• Touch operations in different application scenarios (for example, time reminders, receiving messages, alarm clocks, games, etc.) may also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging status, the change of the battery capacity, and can also be used to indicate messages, notifications and the like.
  • the electronic device 100 may also include other input and output interfaces, and other devices may be connected to the electronic device 100 through a suitable input and output interface.
• Such interfaces may include, for example, audio/video jacks, data connectors, and the like.
• The electronic device 100 is equipped with one or more sensors, including but not limited to a pressure sensor 180A, a gyroscope sensor 180B, an acceleration sensor 180C, a distance sensor 180D, a touch sensor 180E, and the like.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
• There are many types of pressure sensor 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
• In some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can also be used for navigation, somatosensory game scenes, camera anti-shake and so on.
  • the acceleration sensor 180C can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of electronic devices, and can be used in somatosensory game scenes, horizontal and vertical screen switching, pedometers and other applications.
  • the electronic device 100 may track the movement of the user's head according to an acceleration sensor, a gyroscope sensor, and the like.
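One common way to combine these two sensors is a complementary filter: the gyroscope is integrated for short-term rotation, while the gravity direction measured by the accelerometer corrects long-term drift. The sketch below tracks a single pitch axis and is illustrative only; the patent does not specify this fusion method, and the 0.98 weight is an assumption.

```java
// Illustrative complementary filter for head pitch from gyro + accelerometer.
public final class HeadTracker {
    private double pitchRad = 0.0;             // current pitch estimate
    private static final double ALPHA = 0.98;  // trust placed in gyro integration

    // gyroPitchRate: rad/s around the X axis; ax, ay, az: accelerometer (m/s^2)
    public void update(double gyroPitchRate, double ax, double ay, double az,
                       double dtSec) {
        // Short term: integrate angular velocity from the gyroscope.
        double gyroPitch = pitchRad + gyroPitchRate * dtSec;
        // Long term: gravity measured by the accelerometer gives an absolute
        // pitch reference when the device is near-stationary.
        double accelPitch = Math.atan2(-ax, Math.sqrt(ay * ay + az * az));
        pitchRad = ALPHA * gyroPitch + (1 - ALPHA) * accelPitch;
    }

    public double pitchRadians() { return pitchRad; }
}
```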
  • the distance sensor 180D is used to measure the distance.
  • the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180D for distance measurement to achieve fast focusing.
  • the touch sensor 180E is also called “touch device”.
  • the touch sensor 180E is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation may be provided through the display device 194 .
  • the processor 110 may be configured to determine the content that the user sees through the display device of the electronic device 100 according to the sensor data collected by the electronic device 100 .
  • the GPU is used to perform mathematical and geometric operations according to the data obtained from the processor 110 (for example, data provided by the application program), render images using computer graphics technology, computer simulation technology, etc., and determine the image for display on the electronic device 100 image.
  • the GPU may add correction or pre-distortion to the rendering process of the image to compensate or correct the distortion caused by the optical components of the electronic device 100 .
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
• The embodiment of this application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
  • FIG. 4 is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
• The Android system is divided into five layers; from top to bottom, they are the application layer (applications), the application framework layer (application framework), the system libraries (native libraries) and Android runtime, the hardware abstraction layer (HAL), and the kernel layer (kernel).
  • the application layer can consist of a series of application packages.
  • Application packages can include applications such as Camera, Gallery, Games, WLAN, Bluetooth, Music, Video, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
• The application framework layer can include a window manager, an activity manager, a display manager, a resource manager, an input manager, a notification manager, a view system (views), etc.
  • a window manager is used to manage window programs.
  • the window manager can be used to draw the size and location area of the window, control the display or hide of the window, manage the display order of multiple windows, obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the activity manager is used to manage the life cycle of the application's activities, such as managing activity processes such as creation, background, and destruction.
  • the display manager is used to manage the display life cycle of the application. It can decide how to control its logical display according to the currently connected physical display device, and send notifications to the system and applications when the state changes, and so on.
• The input manager is used to monitor and manage input events in a unified manner. For example, when it detects that the user inputs with the handheld controller, or a sensor detects the user's motion data, the input manager can listen to the corresponding system calls and forward or further process the monitored input events.
  • the resource manager provides applications with access to various non-code resources, such as localized strings, icons, images, layout files, video files, and so on.
  • the resource management interface class ResourceImpl can be used as an external resource management interface, through which application resources can be updated.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • the notification display interface may include a view for displaying text and a view for displaying pictures.
• The Android runtime includes the core library and the virtual machine, and is responsible for the scheduling and management of the Android system.
• The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • System libraries can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem, and provides fusion of two-dimensional (2-Dimensional, 2D) and three-dimensional (3-Dimensional, 3D) layers for multiple applications.
  • the surface manager can obtain the SurfaceFlinger service.
  • the SurfaceFlinger service is the core of the graphical user interface (graphical user interface, GUI), and is responsible for mixing and outputting the graphics data of all applications to the buffer stream in order.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing and layer processing, etc.
  • the 2D graphics engine is a drawing engine for 2D drawing.
• The hardware abstraction layer (HAL) is the interface layer between the operating system kernel and the hardware circuits. Its purpose is to abstract the hardware: it can encapsulate the Linux kernel drivers, provide a standard interface upward, and hide low-level driver implementation details.
  • HAL can include hardware composer (hardware composer, HWC), frame buffer (frame buffer), sensor interface, Bluetooth interface, WiFi interface, audio and video interface, etc.
  • the hardware composer (HWC) is used to buffer the graphics data stream synthesized by SurfaceFlinger for final composite display.
  • the frame buffer (frame buffer) is a graphics buffer stream synthesized by the SurfaceFlinger service.
  • the SurfaceFlinger service draws images by writing content to the frame buffer.
  • the sensor interface can be used to obtain the corresponding data from the sensor driver.
  • the kernel layer is the layer between hardware and software, which is used to provide core system services, such as security, memory management, process management, network protocol stack, driver model, etc.
• The kernel layer can include a display controller driver, a sensor driver, a camera driver, an audio driver, a video driver, etc.
  • the driver communicates with the hardware device through the bus, controls the hardware to enter various working states, obtains the value of the device-related registers, and obtains the state of the device. For example, the driver can obtain user operation events, such as sensor input, camera input, etc., and convert the events into data.
  • the software and the hardware can work together, and when the head-mounted display device 200 detects the movement of the user, a corresponding image is generated for display.
• When a sensor collects data, a corresponding hardware interrupt is sent to the kernel layer.
  • the sensor driver at the kernel layer obtains the sensor input data.
  • the input manager at the application framework layer gets sensor input events from the kernel layer.
  • the window manager may then need to update the size or position of the application window, and the activity manager updates the display activity's lifecycle. After the window manager has adjusted the size and position of the window and the active display interface, the display manager will refresh the image in the window area.
  • the surface manager will use the SurfaceFlinger service to mix the graphics data rendered by the GPU in order, generate a graphics buffer stream, and output it to the frame buffer.
  • the frame buffer then sends the graphics buffer stream to the hardware compositor of the hardware abstraction layer, and the hardware compositor performs final compositing on the graphics buffer stream.
• The hardware composer sends the final graphics data to the display driver of the kernel layer, and it is displayed by the display device 194.
  • the software architecture shown in this embodiment does not constitute a specific limitation to the present application.
  • the software architecture of the electronic device 100 may include more or fewer modules than shown in the figure, or combine some modules, or split some modules, or arrange different architectures.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • Fig. 5 shows a communication system 30, which may be a split system in some embodiments.
  • the communication system 30 may include multiple intelligent terminal devices, and communication connections are established between the multiple terminal devices.
  • the communication system 30 includes a head-mounted display device 200 and a host 310.
• The host 310 can be, for example, a device with strong image processing capability, such as computer A or mobile phone A, and is not limited to the devices shown in FIG. 3.
  • the host 310 can be one or more devices with strong image processing performance, such as a server, and the host 310 can also be one or more cloud devices, such as cloud hosts/cloud servers. There is no limit to this.
  • a first connection is established between the head-mounted display device 200 and the host 310 .
• The head-mounted display device 200 is mainly used for displaying images, while the host 310 mainly provides image computation and processing functions and then sends the processed images to the head-mounted display device 200 for display through the first connection.
  • the first connection may be a wired connection or a wireless connection.
  • a terminal device may also be referred to simply as a terminal, and a terminal device is generally an intelligent electronic device that can provide a user interface, interact with a user, and provide a service function for the user.
• Each terminal device in the communication system 30 may run HarmonyOS (HOS) or another type of operating system; the operating systems of the terminal devices in the communication system 30 may be the same or different, which is not limited in this application.
• If multiple terminals in the communication system 30 are equipped with the same distributed operating system, the system composed of these terminals can be called a super virtual device, also known as a hyper terminal. A hyper terminal integrates the capabilities of multiple terminals through distributed technology into a virtual hardware resource pool, and uniformly manages, schedules, and combines terminal capabilities according to business needs to provide services externally, so that different terminals can realize fast connection, mutual assistance, and resource sharing.
  • the first connection may include a wired connection, such as a high definition multimedia interface (high definition multimedia interface, HDMI) connection, a display port (DP) connection, etc., and the first connection may also include a wireless connection, such as a bluetooth (BT) connection , wireless fidelity (wireless fidelity, Wi-Fi) connection, hotspot connection, etc., to realize communication between the head-mounted display device 200 and the host 310 under the same account, no account or different account.
  • the first connection can also be an Internet connection.
  • the head-mounted display device 200 and the host 310 can log in the same account, so as to realize connection and communication through the Internet.
  • multiple terminals can also log in to different accounts, but they are connected through binding.
  • the head-mounted display device 200 and the host 310 can log in to different accounts, and the host 310 sets to bind the head-mounted display device 200 with itself in the device management application, and then connects through the device management application.
  • the embodiment of the present application does not limit the type of the first connection, and the terminals in the communication system 30 can perform data transmission and interaction through various types of communication connections.
  • terminals may also be connected and communicate in combination with any of the above methods, which is not limited in this embodiment of the present application.
  • each terminal device may be configured with a mobile communication module and a wireless communication module for communication.
  • the mobile communication module can provide wireless communication solutions including 2G/3G/4G/5G applied on the terminal.
  • the wireless communication module may include a Bluetooth (bluetooth, BT) module and/or a wireless local area network (wireless local area networks, WLAN) module and the like.
• The Bluetooth module can provide one or more Bluetooth communication solutions including classic Bluetooth (Bluetooth 2.1) or Bluetooth low energy (BLE), and the WLAN module can provide one or more WLAN communication solutions including wireless fidelity peer-to-peer (Wi-Fi P2P), wireless fidelity local area network (Wi-Fi LAN), or wireless fidelity software access point (Wi-Fi softAP).
• Wi-Fi P2P refers to allowing devices in a wireless network to connect to each other in a point-to-point manner without going through a wireless router; in some systems it is also called wireless fidelity direct (Wi-Fi direct).
• Devices that establish a Wi-Fi P2P connection can exchange data directly through Wi-Fi (they must be in the same frequency band) without connecting to a network or hotspot, realizing point-to-point communication such as transferring files, pictures, videos, and other data.
  • Wi-Fi P2P has the advantages of faster search speed and transmission speed, and longer transmission distance.
  • the communication system 30 shown in FIG. 3 is only used to assist in describing the technical solution provided by the embodiment of the present application, and does not limit the embodiment of the present application.
  • the communication system 30 may include more or fewer terminal devices, such as handheld controllers, etc.
  • This application does not make any limitations on terminal types, terminal quantities, connection methods, etc.
• For example, the communication system 30 may further include a PC and a mobile phone.
• MTP delay refers to the total delay in the cycle from the user's head movement until the optical signal of the correspondingly updated image on the display screen of the head-mounted display device reaches the human eye. It includes the time for the sensors to detect the user's motion, the time for the central processing unit (CPU) to run the application and for the graphics processing unit (GPU) to calculate the presented image, and the time to render the image and send it for display.
  • the sensor is used to detect user movement information and/or external environment information, so as to ensure that the viewing angle of the screen can be switched at any time while tracking body movements.
• The sensors include, for example, cameras, gyroscopes, accelerometers, magnetometers, etc.
• A handheld somatosensory controller can sensitively capture the user's movement and generate sensor data in real time, such as image data, acceleration data, angular velocity data, etc.
  • the processor can take the sensor data generated by the sensor in the previous step as input, and obtain the user's real-time pose data through data fusion and calculation.
• Pose refers to the position and orientation of a person or object.
  • the pose may be the head pose of the user.
  • the pose can be obtained through sensors and/or cameras in the head-mounted display device.
• If the head-mounted display device is a 3DoF device, the output pose data includes the quaternion data corresponding to rotation (rotation about the X-axis, Y-axis, and Z-axis); if the head-mounted display device is a 6DoF device, the output pose data includes both the quaternion data corresponding to rotation (rotation about the X-axis, Y-axis, and Z-axis) and the three-axis position (up-down, front-back, and left-right movement) data.
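A minimal data-structure sketch of this pose output: a 3DoF pose carries only the rotation quaternion, while a 6DoF pose adds the three-axis position. Field and method names are illustrative, not taken from the patent.

```java
// Pose as described above: quaternion rotation, optional 3-axis position.
public final class Pose {
    public final float qw, qx, qy, qz;   // rotation about X/Y/Z as a quaternion
    public final float[] position;       // {x, y, z} in meters, or null for 3DoF

    private Pose(float qw, float qx, float qy, float qz, float[] position) {
        this.qw = qw; this.qx = qx; this.qy = qy; this.qz = qz;
        this.position = position;
    }

    public static Pose threeDof(float qw, float qx, float qy, float qz) {
        return new Pose(qw, qx, qy, qz, null);
    }

    public static Pose sixDof(float qw, float qx, float qy, float qz,
                              float x, float y, float z) {
        return new Pose(qw, qx, qy, qz, new float[] { x, y, z });
    }

    public boolean isSixDof() { return position != null; }
}
```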
• The rendering module performs anti-distortion and other processing on the image, and then sends it to a processor such as the GPU for rendering processes such as drawing, coordinate transformation, view transformation, layer rendering, texture synthesis, shading, cropping, and rasterization. For a VR wearable device, two images corresponding to the left and right eyes can be rendered respectively.
• Image rendering includes rendering image attributes such as color and transparency, and also includes rendering images that are rotated and/or translated based on the human body pose data detected by the VR wearable device.
• The posture detected by the VR wearable device includes multiple degrees of freedom, such as rotation angle and/or translation distance, where the rotation angle includes the yaw, pitch, and roll angles, and the translation distance includes the translation distances along the three axes (X-axis, Y-axis, and Z-axis). Therefore, image rendering includes rotating the image according to the rotation angle of the VR wearable device and/or translating the image according to the translation distance of the VR wearable device.
• The rendered image may contain objects such as mountains, water, etc.
• In this way, the image seen by the user is linked with the user's movement and perspective, and the experience is better.
  • the image sending and display module is used to send the image data (multiple pixel data) rendered by processors such as GPU to the display device.
  • the display device includes a frame buffer.
  • the frame buffer can also be called a video memory, and is used to store image rendering data processed by the GPU or to be extracted soon.
  • Sending the rendered data to the display device by the electronic device may refer to sending to the frame buffer in the display device.
  • the display device may not have a frame buffer, and sending the rendered data to the display device by the electronic device may refer to directly sending image data (multiple pixel data) to the display device for display.
• After the GPU submits the image rendering data to the frame buffer, in conjunction with the vertical sync (Vsync) signal of the screen refresh, the video controller extracts the image data from the frame buffer at a specified time point after the Vsync signal; after the display screen or other optical display device receives the display signal, it extracts the buffered frame and displays it on the screen.
• When the display screen displays images, it scans from left to right and from top to bottom, displaying pixels sequentially. After one line is scanned, a horizontal sync (Hsync) signal is sent; after one page is scanned, i.e., exactly one frame of picture has been displayed, a Vsync signal is sent out and the scanning of the next page begins.
• The Vsync signal is a pulse signal, generally generated by a hardware clock; it can also be simulated by software (such as the hardware composer HWC).
• After finishing one frame, the electronic device waits for the Vsync signal before rendering a new frame of image and updating the frame buffer, in order to avoid tearing of the displayed picture and increase its smoothness. If the data in the current frame buffer has not been completely updated and data of the previous frame is still retained, then when the screen refreshes, the content fetched from the frame buffer comes from different frames, producing a sense of screen tearing.
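The sketch below models the double-buffering discipline implied above: the renderer completes a whole frame into a back buffer, and the swap happens only on a Vsync boundary, so scan-out never reads a half-updated frame. All class and method names are hypothetical, not a real display API.

```java
// Double buffering with swap gated on Vsync, to avoid the tearing above.
public final class DoubleBufferedDisplay {
    private int[] frontBuffer = new int[1920 * 1080]; // being scanned out
    private int[] backBuffer  = new int[1920 * 1080]; // being rendered into
    private boolean frameReady = false;
    private final Object lock = new Object();

    // Called once per refresh interval by a (simulated) hardware clock.
    public void onVsync() {
        synchronized (lock) {
            if (frameReady) {            // swap only fully rendered frames
                int[] t = frontBuffer;
                frontBuffer = backBuffer;
                backBuffer = t;
                frameReady = false;
                lock.notifyAll();        // let the renderer start the next frame
            }
        }
    }

    // The renderer submits a complete frame, then blocks until it is consumed.
    public void submitFrame(int[] pixels) throws InterruptedException {
        synchronized (lock) {
            while (frameReady) lock.wait(); // previous frame not consumed yet
            System.arraycopy(pixels, 0, backBuffer, 0, backBuffer.length);
            frameReady = true;
        }
    }
}
```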
• Common display screen types include: liquid crystal display (LCD), organic light emitting diode (OLED), plasma display panel (PDP), etc.
• For example, in an LCD display, the arrangement of the liquid crystal molecules is changed by changing the electric field (voltage), which changes the transmittance of the external light source (the incident light beam) through the liquid crystal; color display is then completed through the color filter (a filter film of the three primary colors red, green, and blue).
• In an OLED display, an organic light-emitting layer is sandwiched between the positive and negative electrodes. The holes generated by the anode and the electrons generated by the cathode move and are injected into the hole transport layer and the electron transport layer respectively, then migrate to the light-emitting layer. When the holes and electrons meet in the light-emitting layer, energy excitons are generated, exciting the light-emitting molecules to finally produce visible light; color filters are then added to complete the color display.
  • the display principles and hardware structures of different types of display screens are not the same, and will not be repeated here. Different types of display screens do not limit this embodiment of the application.
  • the drawing of an image generally requires the cooperation of the CPU and the GPU.
• The CPU can be responsible for calculating display-related content, such as view creation, layout calculation, image decoding, text drawing, pose data calculation, and texture data calculation; the CPU then passes the calculated content to the GPU for transformation, compositing, rendering, and more.
• The GPU can be responsible for layer rendering, texture synthesis, shader rendering, etc., which can include: the vertex shader processes vertex data through operations such as translation, scaling, rotation, model transformation, viewing transformation, and projection transformation, converting the vertices into standardized coordinates; the tessellation shader and the geometry shader refine the shape of the object, processing the model geometry to make it smoother; the processed data is assembled into primitives; the primitives are clipped according to the viewport, cutting away the parts invisible in the viewport; the primitives are rasterized to generate display screen coordinates and pixel fragments; the fragment shader colors the pixel fragments; after fragment shader processing, texture mixing and other rendering processes are performed, finally producing the pixel data of the image.
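To make the stage order concrete, here is a rough software model of that pipeline: vertex transformation to normalized coordinates, clipping against the viewport, and rasterization to screen coordinates, with fragment shading noted as the final step. Real GPUs execute these stages in dedicated parallel hardware; the data types, the identity-like transform, and the 1920x1080 resolution below are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified software walk-through of the GPU stage order described above.
public final class PipelineSketch {
    public static List<int[]> run(List<float[]> vertices, float[] mvp4x4) {
        // 1. Vertex shader: transform each vertex toward normalized coordinates
        //    (perspective divide omitted; w is assumed to be 1 here).
        List<float[]> transformed = new ArrayList<>();
        for (float[] v : vertices) transformed.add(transform(mvp4x4, v));
        // 2. Clipping: keep vertices inside the normalized [-1, 1] range
        //    (a per-vertex stand-in for real per-primitive clipping).
        List<float[]> clipped = new ArrayList<>();
        for (float[] v : transformed)
            if (Math.abs(v[0]) <= 1 && Math.abs(v[1]) <= 1) clipped.add(v);
        // 3. Rasterization: map normalized coordinates to screen pixels.
        List<int[]> fragments = new ArrayList<>();
        for (float[] v : clipped)
            fragments.add(new int[] { (int) ((v[0] + 1) * 0.5f * 1920),
                                      (int) ((1 - v[1]) * 0.5f * 1080) });
        // 4. Fragment shading and texture blending would color each fragment here.
        return fragments;
    }

    // Row-major 4x4 matrix applied to (x, y, z, 1).
    static float[] transform(float[] m, float[] v) {
        float[] r = new float[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[4 * i] * v[0] + m[4 * i + 1] * v[1]
                 + m[4 * i + 2] * v[2] + m[4 * i + 3];
        return r;
    }
}
```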
  • the GPU will send the pixel data of the image to the frame buffer.
  • the video controller extracts the pixel data of the image frame in the frame buffer at a specified time, and then displays the frame image on the display screen.
• The display of each frame of image needs to cooperate with the Vsync signal: within one Vsync cycle, a complete frame of image needs to be rendered and sent to the frame buffer; otherwise, the display will freeze, drop frames, tear, and so on.
• When the AR/VR or MR device has a split structure, such as in the communication system 30, it may also include a data transmission module, which can be used to send or receive image-related data through a wired or wireless connection.
• In some embodiments, the sensor module captures the movement of the user's head and generates sensor data, which is sent to the host 310 by the data transmission module; the motion tracking algorithm module in the host 310 processes the sensor data to generate data such as the pose, which is then sent to the head-mounted display device 200 through the data transmission module.
• The image rendering module in the head-mounted display device 200 can render an image based on the data such as the pose; the rendered data is then sent to the frame buffer and finally displayed on the display screen.
• In other embodiments, the rendering module can be located on the host 310. When the user wears the head-mounted display device 200 and the head moves, the sensor module captures the movement of the user's head, generates sensor data, and sends it to the host 310 through the data transmission module. The motion tracking algorithm module in the host 310 processes the sensor data to generate the pose and other data; the image rendering module in the host 310 renders an image based on the pose data and then sends the rendered data to the head-mounted display device 200 through the data transmission module, where it is finally extracted by the display sending module and displayed on the display screen.
• By reducing the MTP delay, human-computer interaction performance can be improved, thereby reducing the user's dizziness, vomiting, and other adverse reactions and improving the user experience.
• Each link can be optimized to reduce its time consumption, for example by increasing the frame rate of image acquisition, providing predicted poses, improving the rendering performance of the GPU, and raising the display refresh rate to above 75 Hz, and so on.
  • the MTP delay can be decomposed into various links.
  • Figure 8 shows the time-consuming analysis of each module. As shown in Figure 8:
  • the sensor detection module is used to detect user movement information and/or external environment information.
• Here, a camera and an IMU are taken as example sensors for illustration.
  • the frequency of image generation is 30Hz. Assuming that the exposure time of a frame of image is 10ms, and the time stamp of the image is in the middle of the image exposure, there is a 5ms delay between the time stamp of the image and its final imaging.
• The IMU has a frequency of 1000 Hz, so IMU data generation can be considered to have almost no delay.
• Due to the large amount of image data, there is also a certain delay between image generation and its transmission to the processor; assuming it is 5 ms, the total time from sensor data generation to transmission completion is 10 ms.
• Motion tracking algorithm module: the processor can calculate the user's pose data based on the sensor data. Assume this step takes 5 ms. Of course, the calculation time of this module can be reduced through optimization, and the overall MTP delay can even be reduced by predicting the pose; for now, only real-time pose calculation is considered, i.e., this module introduces a delay of 5 ms.
• Image sending and display module: the image sending and display module is used to send the rendered image frame data to the display device. Due to the large amount of image data finally sent to the display, it is assumed that sending for display takes 8 ms.
  • the total time spent on performing the above four steps in series is 31 ms, that is, the MTP delay from the user's head movement to the final display of the image is 31 ms.
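A quick worked check of this budget, using the figures above together with the 8 ms image-rendering stage taken from the simplified three-stage breakdown below (15 ms acquisition = 10 ms sensor + 5 ms tracking):

```java
// Serial MTP budget from the example: 10 ms (sensor data generation and
// transmission) + 5 ms (pose calculation) + 8 ms (image rendering, per the
// simplified three-stage breakdown) + 8 ms (sending for display) = 31 ms.
public final class MtpBudget {
    public static void main(String[] args) {
        int sensorMs = 10, trackingMs = 5, renderMs = 8, displayMs = 8;
        System.out.println("Serial MTP delay = "
                + (sensorMs + trackingMs + renderMs + displayMs) + " ms"); // 31 ms
    }
}
```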
  • Table 1 shows the time-consuming analysis of each step of the MTP delay in the corresponding AR scenario.
• The generation of the above sensor data and the calculation of the pose, i.e., the work done by the sensor detection module and the motion tracking algorithm module, are collectively referred to as the image acquisition stage.
• After image acquisition, image rendering and image display can be performed. The steps described in FIG. 8 and Table 1 can thus be simplified into three stages: image acquisition, image rendering, and image display, taking 15 ms, 8 ms, and 8 ms respectively, as shown in FIG. 9.
  • the data transmission between the host and the head-mounted display device will also take a certain amount of time.
• The time consumed by data transmission can also be included in the time spent in the image acquisition stage.
  • the process from image acquisition, image rendering to image display is implemented serially, and the minimum image unit of each link is a whole image, and there is redundancy in time, which can be further optimized.
• To address this, the present application provides an image transmission and display method, which can divide a whole image into multiple parts, perform parallel image rendering and/or parallel image display processing on these multiple parts, and/or transmit the multiple partial images in parallel between multiple devices, so that the display path is optimized, thereby reducing the waiting time in the process of image transmission and display and further reducing the delay time of image transmission and display.
• Implementing the method provided in this application can reduce the delay time in the image transmission and display process, so that images can be displayed faster. Furthermore, in AR/VR or MR scenarios, it can reduce the MTP delay, effectively alleviate adverse motion-sickness reactions such as dizziness and vomiting, and improve the user experience. It can be understood that the method provided by this application can be applied to more scenarios, such as a PC screen displaying images or a vehicle-mounted device displaying images; through the parallel processing of sliced transmission, rendering, and sending for display, it speeds up the end-to-end display process, reduces delay, and improves the user's viewing experience.
  • the present application also provides related electronic equipment or systems, which can apply the image transmission and display method provided in the present application.
  • the electronic device may include the aforementioned head-mounted display device, and the head-mounted display device may realize display effects such as AR, VR, and MR.
  • the electronic device involved in the embodiment of the present application may also be other devices including display screens, such as terminal devices such as mobile phones, PADs, PCs, smart TVs, and vehicle-mounted devices.
  • This application does not impose any restrictions on the specific types of electronic devices.
  • the communication system 30 in the foregoing example does not impose any limitation on other embodiments of the present application.
• The various embodiments provided by this application are mainly introduced by taking a head-mounted display device in a VR scene as the example electronic device, but the image transmission and display methods provided by the embodiments of this application are not limited to head-mounted display devices and VR scenes; more generally, the transmission and display of images by various types of electronic devices (such as mobile phones, PCs, PADs, vehicle-mounted devices, game consoles, smart wearable devices, smart home devices, Internet of Things devices, etc.) can universally use the image transmission and display methods provided by this application.
  • the image transmission and display method provided by the embodiment of this application can be applied to various business scenarios, including but not limited to:
  • the user can wear a head-mounted display device, and the head-mounted display device can use VR, AR, MR and other technologies to display images, so that users can feel the virtual environment and provide users with VR/AR/MR experience.
  • the MTP delay is relatively large, which may easily cause adverse reactions such as dizziness and vomiting of users.
  • VR/AR/MR images can be displayed faster, and the MTP delay is reduced, effectively alleviating the user's dizziness, vomiting and other adverse reactions of motion sickness, and improving the user experience.
• In a screen-casting scenario, the PC or large screen can display the mobile phone interface more quickly, giving users a smoother screen-casting experience.
• A smart car display can generate new images based on feedback from user operations, on-board sensors, or other input information. With the method of this application, the car display can display the corresponding images more quickly, giving users a faster and more comfortable driving experience.
  • the scenarios described above are only examples for illustration, and do not constitute any limitation to other embodiments of the present application. Not limited to the above scenarios, the image transmission and display method provided by the embodiment of the present application can be applied to other social scenarios, office scenarios, shopping scenarios and other scenarios that require image display.
• The image transmission and display technical solution provided by this application optimizes the display path by dividing a whole image into multiple parts for parallel transmission, rendering, and display processing, thereby reducing the waiting time in the process of image transmission and display and further reducing the delay time of image transmission and display.
  • it can reduce the MTP delay, effectively alleviate the adverse reactions of motion sickness such as dizziness and vomiting, and improve the user experience.
  • Fig. 10 illustrates the process of parallel rendering and displaying provided by some embodiments.
• After acquiring the sensor data and calculating the pose data by the processor, the head-mounted electronic device 200 acquires the image data of a whole image for rendering. As shown in FIG. 11, it is assumed that the whole first image is horizontally divided into four parts from top to bottom, which are respectively the first slice Slice1, the second slice Slice2, the third slice Slice3, and the fourth slice Slice4.
• The whole image is divided evenly into Slice1, Slice2, Slice3, and Slice4. Given that the rendering time of the whole image is 8 ms and its send-to-display time is 8 ms, the rendering time of each of the evenly divided slices Slice1, Slice2, Slice3, and Slice4 is 2 ms, and the send-to-display time of each is 2 ms.
  • the GPU and other processors first render Slice1, which takes 2ms.
  • the rendered image pixel data is sent to the display device (such as a display device such as a display screen), which can also be referred to as sending display, and takes 2ms.
• The display device includes a frame buffer, so sending for display may refer to sending the rendered image pixel data to the frame buffer. While Slice1 is being sent for display, processors such as the GPU render Slice2, which takes 2 ms; Slice2 is then sent for display, which takes 2 ms. While Slice2 is being sent for display, the GPU renders Slice3 (2 ms), which is then sent for display (2 ms). While Slice3 is being sent for display, the GPU renders Slice4 (2 ms), which is then sent for display (2 ms).
  • the display screen includes a frame buffer, and after sending and displaying Slice1, Slice2, Slice3, and Slice4, the display screen extracts the complete data of the first image from the frame buffer for display in conjunction with the Vsync signal.
  • the display screen extracts the pixel data of the first image from the frame buffer and completely displays the first image.
• In other embodiments, the display screen has no frame buffer. As shown in FIG. 12, when Slice1 has been sent for display, the display screen directly displays the image of Slice1; when Slice2 has been sent for display, the display screen displays the images of Slice1 and Slice2; when Slice3 has been sent for display, the display screen displays the images of Slice1, Slice2, and Slice3; when Slice4 has been sent for display, the display screen displays the images of Slice1, Slice2, Slice3, and Slice4, i.e., the first image is completely displayed.
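The timing benefit of slicing can be checked with a small simulation: rendering of slice i+1 overlaps with sending slice i, so the last slice finishes at 10 ms instead of the serial 8 ms + 8 ms = 16 ms. The helper below is an illustrative sketch, not the patent's scheduler; it also handles uneven splits such as the 2:3:1:2 division discussed later.

```java
// Pipelined timing for sliced render + send-to-display.
public final class SlicePipeline {
    // renderMs[i] and sendMs[i] are the per-slice durations; sending of a
    // slice can start only after that slice is rendered and the previous
    // slice has finished sending.
    public static int pipelinedMs(int[] renderMs, int[] sendMs) {
        int renderDone = 0, sendDone = 0;
        for (int i = 0; i < renderMs.length; i++) {
            renderDone += renderMs[i];                             // slices render back to back
            sendDone = Math.max(sendDone, renderDone) + sendMs[i]; // send waits for its slice
        }
        return sendDone;
    }

    public static void main(String[] args) {
        int[] even = {2, 2, 2, 2};
        System.out.println(pipelinedMs(even, even));     // 10 ms (vs 16 ms serial)
        int[] uneven = {2, 3, 1, 2};
        System.out.println(pipelinedMs(uneven, uneven)); // 11 ms: uneven splits
        // pipeline slightly worse, matching the remark that even division
        // usually makes fuller use of the parallel processing capability.
    }
}
```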
  • This embodiment does not impose any restrictions on the way each image is divided: an image may be divided into any number of parts, such as four or six; horizontally, vertically, or in any direction; and evenly or unevenly. Usually, dividing the image evenly makes full use of the parallel processing capability of each module and saves more time than dividing it unevenly.
  • when divided evenly, the sizes of the slices are basically the same or identical; when divided unevenly, the sizes of the slices can differ.
  • For example, suppose the rendering time of an entire image is 8 ms and its sending-for-display time is 8 ms, and the image is divided in a ratio of 2:3:1:2 into the fifth slice Slice5, the sixth slice Slice6, the seventh slice Slice7, and the eighth slice Slice8. The rendering times of these slices are then 2 ms, 3 ms, 1 ms, and 2 ms respectively, and their sending-for-display times are 2 ms, 3 ms, 1 ms, and 2 ms respectively.
  • The GPU and other processors first render Slice5, which takes 2 ms; after Slice5 is rendered, sending it for display takes 2 ms.
  • While Slice5 is being sent for display, the GPU and other processors render Slice6, which takes 3 ms, and sending Slice6 for display then takes 3 ms. While Slice6 is being sent for display, the GPU and other processors render Slice7, which takes 1 ms, and after Slice6 has been sent, sending Slice7 for display takes 1 ms. Rendering Slice8 takes 2 ms, and after Slice7 has been sent for display, sending Slice8 for display takes 2 ms.
  • the specific way to divide the image can be customized by the developer according to the actual situation.
  • the best division scheme is the one under which the GPU's efficiency is maximized, that is, the image division scheme with the shortest total time for rendering plus sending for display when power consumption and performance are well balanced.
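  • Continuing the sketch above (it reuses the pipeline_finish_time function defined there), the uneven 2:3:1:2 split of this example finishes later than the even split, which illustrates why even division usually saves more time:

```python
print(pipeline_finish_time([2, 3, 1, 2], [2, 3, 1, 2]))  # uneven split: 11.0 ms
print(pipeline_finish_time([2, 2, 2, 2], [2, 2, 2, 2]))  # even split:   10.0 ms
```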
  • the VR glasses will render and display the image after acquiring the image.
  • Image distortion is a phenomenon in which a normal image is deformed by the inherent characteristics of the optical lenses (such as convex lenses) in VR glasses. It occurs because light rays are bent more strongly far from the center of the lens than near it. Distortion is distributed along the radius of the lens and includes barrel distortion and pincushion distortion.
  • Image pre-distortion counteracts the distortion effect of the optical lens in the VR glasses by applying inverse distortion preprocessing to the normal image in advance, so that the image the user finally sees is a normal image.
  • For example, the distortion of the optical lens in VR glasses turns a normal image into a pincushion-distorted image, so in the image rendering stage, pre-distortion processing can be applied in advance to turn the normal image into a barrel-distorted one. After the barrel-distorted image passes through the pincushion distortion effect of the optical lens in the VR glasses, the image presented to the user's eyes is a normal image.
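  • The sketch below shows one common way such pre-distortion can be computed: a radial polynomial model whose coefficients k1 and k2 are illustrative assumptions, not values from this application. A negative k1 pulls edge pixels toward the lens center, producing the barrel-distorted image that the lens's pincushion distortion then cancels.

```python
def predistort(x, y, k1=-0.25, k2=-0.05):
    """Map a normalized pixel (x, y), with the origin at the lens center,
    to its pre-distorted (barrel) position using a radial polynomial."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

print(predistort(0.0, 0.0))  # the center pixel is unchanged
print(predistort(0.8, 0.6))  # edge pixels move toward the center: (0.56, 0.42)
```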
  • Time warp is a technology of image frame correction.
  • When the head moves too fast, scene rendering lags behind: the user's head has already turned, but the new image has not been rendered yet, or the previous frame is still being displayed. If rendering takes too long, a frame is lost and the result is a juddering image.
  • Time warp alleviates rendering lag by warping an image before it is sent to the display. The most basic time warp is orientation-based: it corrects the image shake caused by head rotation and pose changes, and it can generate a new image frame with relatively few computing resources.
  • When the rendered frame is not synchronized with the head movement, time warp can generate an image to replace the frame that has not finished rendering, that is, it automatically fills in image frames so that successive frames transition smoothly.
  • each Slice is a 1 ⁇ 4 grid, as shown in (b) in Figure 14.
  • Each Slice needs to be pre-distorted. For each image vertex in the Slice, the distorted pixel position corresponding to the vertex can be calculated from its pre-distortion pixel position in the original image and the distortion formula.
  • Pixels other than vertices can use interpolation methods (such as linear interpolation, bilinear interpolation, cubic spline interpolation, etc.) to calculate their distorted pixel positions, finally generating the pre-distortion effect shown in (b) in Figure 14.
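  • A minimal sketch of the interpolation step, assuming bilinear interpolation inside one grid cell whose four corner vertices have already been mapped through the distortion formula (the corner values below are illustrative):

```python
def bilerp(corners, u, v):
    """Bilinearly interpolate the distorted position of a pixel at
    normalized cell coordinates (u, v) in [0, 1] x [0, 1], given the
    distorted positions of the cell's four corners:
    corners = ((x00, y00), (x10, y10), (x01, y01), (x11, y11))."""
    (x00, y00), (x10, y10), (x01, y01), (x11, y11) = corners
    x = (1-u)*(1-v)*x00 + u*(1-v)*x10 + (1-u)*v*x01 + u*v*x11
    y = (1-u)*(1-v)*y00 + u*(1-v)*y10 + (1-u)*v*y01 + u*v*y11
    return x, y

# Distorted corner positions as computed per vertex (e.g. by predistort()):
cell = ((0.0, 0.0), (0.9, 0.0), (0.0, 0.9), (0.85, 0.85))
print(bilerp(cell, 0.5, 0.5))  # distorted position of the cell-center pixel
```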
  • time warping is also performed on the image.
  • the acquired image is rotated accordingly to obtain a new image frame as the final display image; (c) in Figure 14 shows the result after pre-distortion and image time warping.
  • the predicted pixel position corresponding to the vertex can be obtained through calculation.
  • Pixels other than vertices can use interpolation methods (such as linear interpolation, bilinear interpolation, cubic spline interpolation, etc.) to generate predicted pixel positions.
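  • The sketch below illustrates the simplest, orientation-only form of this correction, assuming a pinhole projection and small rotation deltas; the shift approximation and parameter names are assumptions for illustration, not the formula used in this application.

```python
import math

def timewarp_shift(x, y, yaw_delta_rad, pitch_delta_rad, focal=1.0):
    """Approximate reprojection of pixel (x, y) for a small head rotation
    since the frame's pose was sampled (pinhole model, focal length `focal`)."""
    return x + focal * yaw_delta_rad, y + focal * pitch_delta_rad

# Head turned 1.5 degrees right and 0.5 degrees down after rendering:
print(timewarp_shift(0.2, -0.1, math.radians(1.5), math.radians(-0.5)))
```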
  • After each image is rendered by the GPU, it is sent to the display device, referred to as sending for display.
  • the hardware Vsync signal is a pulse signal emitted after the display refreshes a whole frame of image. If the sliced rendering and sending-for-display process is not matched to the Vsync signal, the display may refresh only a part of the image while the unrefreshed part still shows the previous frame, resulting in screen tearing.
  • SurfaceFlinger can composite the image data in the buffer and then send it to the display device, or a hardware composer (HWC) can composite the image data in hardware and send it for display.
  • the Vsync signal matches the refresh rate of the screen. When a Vsync signal arrives, the screen starts to refresh pixels from top to bottom and from left to right. Furthermore, the refresh of each row can be coordinated with a horizontal sync (Hsync) signal, and the pixels within each row can be transmitted in step with a pixel clock (PCLK) signal.
  • pixel data is written from left to right and from top to bottom, and one screen refresh is completed within one Vsync cycle.
  • In some embodiments, the electronic device can control the display timing, fetching the pixels at fixed positions in the cache at a fixed time point within a cycle for display. For example, if a time point is specified and the electronic device assumes that the GPU can finish rendering the image before that time point, then when the time point arrives, the HWC of the electronic device sends the pixels of the image in order to fixed positions on the display screen for display.
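  • The following sketch relates the three clocks just described for a panel scanned top to bottom and left to right; the 90 Hz rate and 1920x1080 resolution are illustrative assumptions, and blanking intervals are ignored:

```python
refresh_hz = 90          # assumed panel refresh rate
rows, cols = 1080, 1920  # assumed resolution

vsync_period_ms = 1000 / refresh_hz               # one whole-frame refresh
hsync_period_us = vsync_period_ms * 1000 / rows   # one row of pixels
pclk_hz = refresh_hz * rows * cols                # one pixel per PCLK tick

print(f"Vsync period: {vsync_period_ms:.1f} ms")  # ~11.1 ms
print(f"Hsync period: {hsync_period_us:.1f} us")  # ~10.3 us
print(f"Pixel clock:  {pclk_hz / 1e6:.1f} MHz")   # ~186.6 MHz
```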
  • black insertion technology is also involved in the display process.
  • black insertion prevents the smearing caused by the persistence of vision of the human eye.
  • the electronic device synchronously completes the rendering, sending for display, and displaying of a whole frame of image.
  • image rendering may be performed in advance in order to complete the display sending in time before the display signal arrives.
  • the electronic device can set the brightness of the entire display screen to zero to implement black insertion, for a duration of about 80% of a Vsync period. For example, if the refresh rate of the display screen is 90 Hz, its Vsync period is 11.1 ms and the black-insertion time is about 8.9 ms.
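  • The example's black-insertion arithmetic, written out (the 80% ratio is the example's, not a fixed requirement):

```python
refresh_hz = 90
vsync_period_ms = 1000 / refresh_hz   # ~11.1 ms per Vsync period
black_ms = 0.8 * vsync_period_ms      # ~8.9 ms spent dark per period
print(f"{vsync_period_ms:.1f} ms period, {black_ms:.1f} ms of black insertion")
```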
  • When black insertion ends and the display lights up, the electronic device needs to ensure that all image frames to be displayed have been refreshed to the screen, so that the user sees a new, complete image.
  • Before that, the electronic device needs to render and send for display all of the sliced images (that is, Slice1, Slice2, Slice3, and Slice4).
  • Slice1 is sent to display; while Slice1 is sent to display, Slice2 is rendered.
  • Slice2 is sent to display; while Slice2 is sent to display, Slice3 is rendered.
  • Slice3 is sent to display; while Slice3 is sent to display, Slice4 is rendered.
  • after the black insertion is completed, the display screen lights up and displays a complete image composed of Slice1, Slice2, Slice3, and Slice4.
  • the time periods of image rendering and image sending and display are both shorter than one Vsync period.
  • turning off the display of the display screen may include any of the following cases: (1) turning off the backlight power supply of the display screen; (2) turning off the backlight power supply of the display screen and the power supply of the display panel; (3) turning off the backlight power supply of the display screen, the display panel, the screen driver integrated circuit (IC), and the backlight driver IC.
  • When the processor controls the backlight driver IC to turn off the backlight power supply, the processor still sends display data to the display panel through the screen driver IC and the backlight driver IC, but because the backlight is off, the display screen cannot show images. Since display data is continuously sent to the display panel, restoring the backlight power supply is fast.
  • When the power supply of the display panel is also turned off, the display panel cannot receive the display data sent by the screen driver IC, and the initial configuration data of the display panel is lost.
  • When the power supply of the display screen is restored, each pixel needs to be initialized and configured (for example, some initial potential assignments), so the display screen resumes displaying relatively slowly, but the power consumption of the display panel is saved.
  • when the required response speed for the display screen to resume displaying is not high, the display panel can be turned off to further save power consumption.
  • when the backlight driver IC is powered off, it cannot receive the backlight data sent by the processor.
  • the backlight driver IC also cannot receive the color data sent by the screen driver IC.
  • when the power supply of the backlight driver IC is restored, the backlight driver IC likewise needs to be initialized and configured.
  • when the screen driver IC is powered off, it cannot receive the display data sent by the processor, nor can it send color data to the backlight driver IC.
  • when the power supply of the screen driver IC is restored, initial configuration of the screen driver IC is also required; the display therefore resumes slowly.
  • for an OLED display screen, turning off the display may include any of the following cases: (1) turning off the power supply of the OLED display panel; (2) turning off the power supply of the OLED display panel and the power supply of the screen driver IC.
  • When the processor controls the power supply of the OLED display panel to be turned off, the processor still sends display data to the screen driver IC, but since the OLED display panel is unpowered, the OLED display cannot show images.
  • When the power supply of the display screen is restored, each pixel needs to be initialized and configured (for example, some initial potential assignments). Since display data has been continuously sent to the OLED display panel, restoring the power supply of the OLED display panel is fast.
  • When the screen driver IC is turned off, it cannot receive the display data sent by the processor, nor can it send display data to the OLED display panel. Similarly, when the power supply of the screen driver IC is restored, initial configuration of the screen driver IC is required, so restoring the power supply of the screen driver IC is slow.
  • In some embodiments, the processor may turn off the power supply of the pixels in a part of the OLED display panel; that part of the area then cannot display images, which implements turning off the display in some areas of the screen.
  • the image displayed on the display screen can become a virtual image in the user's eyes.
  • through the optical design of the display screen, the focal point corresponding to the virtual image can be set at a certain distance in front of the user's eyes, such as 2 meters or 4 meters; the distance may also be an interval, such as 2-4 meters. The image displayed on the display screen then appears to the user's eyes as if imaged at that fixed focal point in front of the eyes.
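  • One way such a fixed virtual-image distance can arise is the thin-lens relation 1/f = 1/d_o + 1/d_i with the screen placed just inside the lens's focal length; the focal length and screen distance below are illustrative assumptions, not this application's optical design.

```python
f = 0.040    # assumed lens focal length: 40 mm
d_o = 0.039  # assumed screen distance, just inside the focal length

d_i = 1 / (1 / f - 1 / d_o)  # negative: a virtual image on the screen side
print(f"virtual image about {abs(d_i):.2f} m in front of the eye")  # ~1.56 m
```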
  • the foregoing embodiments may be applied to a case where an electronic device independently displays an image, and may also be applied to a case where multiple devices cooperate to display an image.
  • FIG. 16 shows a communication system 40 based on wireless transmission.
  • the communication system 40 includes a computer A and a head-mounted display device 200 , and data is transmitted between the computer A and the head-mounted display device 200 through a wireless connection 41 .
  • the image rendering can be completed by the computer A, or by the head-mounted display device 200 .
  • the image rendering portion is performed by computer A.
  • the head-mounted display device 200 can transmit the sensor data detected by its sensors to the computer A through the wireless connection 41, and the computer A obtains a frame of image to be rendered after processing such as pose calculation. Computer A can then divide the whole frame into slices, render the slices in parallel, encode each slice after rendering, and transmit the slices to the head-mounted display device 200 through the wireless connection 41. After the head-mounted display device 200 receives each slice of image data from the computer A through the wireless connection 41, it decodes and sends the slices for display in parallel, and finally displays the image.
  • the processes from generating sensor data through pose calculation to obtaining a frame of image to be rendered are collectively referred to as image acquisition.
  • After the computer A obtains the entire frame of the first image to be rendered, it can divide the frame into four parts: Slice1, Slice2, Slice3, and Slice4.
  • Computer A renders Slice1 first and then encodes it. After the rendering of Slice1 is completed, computer A renders Slice2 while encoding Slice1, and encodes Slice2 once it is rendered. After encoding Slice1, computer A transmits it to the head-mounted display device 200 through the wireless connection 41, encoding Slice2 while Slice1 is being transmitted. After the rendering of Slice2 is completed, computer A renders Slice3 while encoding Slice2, and encodes Slice3 once it is rendered. By analogy, after computer A finishes encoding Slice2, it encodes Slice3 while transmitting Slice2 to the head-mounted display device 200.
  • After the rendering of Slice3 is completed, computer A renders Slice4 while encoding Slice3, and encodes Slice4 once it is rendered. After computer A has encoded Slice3 and transmitted it to the head-mounted display device 200, computer A encodes Slice4 and then transmits it to the head-mounted display device 200.
  • After receiving Slice1, the head-mounted display device 200 decodes it. After the decoding of Slice1 is completed, the head-mounted display device 200 sends Slice1 for display and, at the same time, decodes the received Slice2.
  • After Slice2 is decoded, Slice2 is sent for display, and while Slice2 is being sent for display, the head-mounted display device 200 decodes the received Slice3.
  • After Slice3 is decoded, Slice3 is sent for display, and while Slice3 is being sent for display, the head-mounted display device 200 decodes the received Slice4.
  • After Slice4 is decoded, Slice4 is sent for display.
  • the head-mounted display device 200 can display the complete first image on the display screen.
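  • A minimal sketch generalizing the earlier two-stage timing to the five stages of this scenario (render, encode, transmit, decode, send for display); the per-slice stage durations are illustrative assumptions, not measurements from this application.

```python
def multistage_finish(stage_ms):
    """stage_ms[k][i] is the duration of stage k for slice i. Each stage
    handles slices in order; a slice enters stage k when it has left
    stage k-1 and stage k is free. Returns when the last slice leaves
    the last stage."""
    n_stages, n_slices = len(stage_ms), len(stage_ms[0])
    done = [[0.0] * n_slices for _ in range(n_stages)]
    for i in range(n_slices):
        for k in range(n_stages):
            stage_free = done[k][i - 1] if i else 0.0
            slice_ready = done[k - 1][i] if k else 0.0
            done[k][i] = max(stage_free, slice_ready) + stage_ms[k][i]
    return done[-1][-1]

# Four slices at 2 ms per stage vs. the whole frame at 8 ms per stage:
print(multistage_finish([[2, 2, 2, 2]] * 5))  # 16.0 ms
print(multistage_finish([[8]] * 5))           # 40.0 ms
```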
  • the image rendering part is completed by the head-mounted display device 200 .
  • the head-mounted display device 200 can transmit the sensor data detected by its sensors to the computer A through the wireless connection 41, and the computer A obtains a frame of image to be rendered after processing such as pose calculation. The computer A can then divide the whole frame into slices, encode each slice, and transmit the slices to the head-mounted display device 200 through the wireless connection 41. After the head-mounted display device 200 receives the slices from the computer A through the wireless connection 41, it decodes, renders, and sends each slice for display in parallel, and finally displays the image.
  • the processes from generating sensor data through pose calculation to obtaining a frame of image that has not yet been rendered are collectively referred to as image acquisition.
  • After the computer A acquires the whole frame of the first image, it can divide the frame into four parts: Slice1, Slice2, Slice3, and Slice4.
  • The computer A encodes Slice1 first and, after the encoding is completed, transmits it to the head-mounted display device 200 through the wireless connection 41. After finishing the encoding of Slice1, computer A encodes Slice2 while transmitting Slice1 to the head-mounted display device 200. By analogy, after computer A finishes encoding Slice2, it encodes Slice3 while transmitting Slice2, and after encoding and transmitting Slice3, computer A encodes Slice4 and then transmits it to the head-mounted display device 200.
  • After receiving Slice1, the head-mounted display device 200 decodes it. After the decoding of Slice1 is completed, the head-mounted display device 200 renders and sends Slice1 for display and, at the same time, decodes the received Slice2.
  • After Slice2 is decoded, Slice2 is rendered and sent for display, and at the same time the head-mounted display device 200 decodes the received Slice3.
  • After Slice3 is decoded, Slice3 is rendered and sent for display, and at the same time the head-mounted display device 200 decodes the received Slice4.
  • After Slice4 is decoded, Slice4 is rendered and sent for display.
  • the head-mounted display device 200 can display the complete first image on the display screen.
  • the image codec may adopt technologies such as H.265, with the aim of compressing the image size and speeding up image transmission.
  • In some embodiments, the encoding step, transmission step, and decoding step can also be combined into a single transmission step.
  • In this way, both the sending end (computer A) and the receiving end (head-mounted display device 200) process the stages of each sliced image (encoding, transmission, decoding, rendering, and sending for display) in parallel, which greatly shortens the delay time in the image transmission and display process, so the image can be displayed faster.
  • the sending end and receiving end shown in the communication system 40 may also be any other types of electronic devices.
  • the connection type between the sending end and the receiving end is not limited to wireless connection, and may also be wired connection or other connections.
  • the content transmitted between the sending end and the receiving end is not limited to pictures, but can also be files, videos, etc.
  • the way of dividing an image is not limited to this example, either. Other scenarios based on the same scheme are within the protection scope of this application.
  • a communication system composed of a first device and a second device is taken as an example for description.
  • the first device and the second device cooperate to display images, wherein the second device is an image sending end, the first device is an image receiving end, and the first device includes a display device for displaying images.
  • image rendering is completed by the first device.
  • the embodiment of the present application does not impose any limitation on the type of each device, which may be a mobile phone, a PC, a smart screen, a head-mounted display device, and the like. It can be understood that the method provided in this embodiment is only an example, and does not constitute any limitation to other embodiments of the present application, and more or fewer terminal devices may be included in an actual service scenario.
  • the first device is the aforementioned head-mounted display device 200
  • the second device is the aforementioned computer A
  • the first device and the second device form a communication system 40
  • the communication system 40 can display images using technologies such as VR, AR, and MR, so that users can perceive a virtual environment and enjoy a VR/AR/MR experience.
  • This embodiment does not limit the operating systems carried by the first device and the second device.
  • Fig. 19 is a flow chart of the image transmission and display method provided by Embodiment 1, which specifically includes the following steps:
  • a first device establishes a first connection with a second device.
  • the first connection established between the first device and the second device may include a wired connection, such as an HDMI connection, a DP connection, or a USB connection, or a wireless connection, such as a Bluetooth connection, a Wi-Fi connection, or a hotspot connection; it can also be an Internet connection that enables communication between the first device and the second device under the same account, no account, or different accounts.
  • the embodiment of the present application does not limit the type of the first connection.
  • the first connection may also be a communication connection combining any of the above methods, which is not limited in the embodiment of the present application.
  • the first device may establish a first connection with the second device based on Wi-Fi near-field networking communication, such as a Wi-Fi P2P connection.
  • the data transmitted in this embodiment is data corresponding to multiple frames of images.
  • this embodiment does not impose any limitation on the type of data transmitted between the first device and the second device.
  • images, video, audio, text, etc. can also be transmitted.
  • the second device obtains the third image, and divides the third image into a first image and a second image.
  • the third image acquired by the second device may be a third image generated by the second device based on the sensor data fed back by the first device, or an image generated by the second device itself.
  • this embodiment does not limit the source or the acquisition process of the third image.
  • the second device may divide the third image, for example, divide the entire third image area into small areas that do not overlap each other.
  • the third image is divided into two parts, that is, the first image and the second image, as an example for description.
  • the third image may also be divided into three or more parts; neither the sizes of the divided areas nor the basis for the division is limited in any way, as long as the divided images can be combined into the complete third image.
  • the second device encodes the first image.
  • encoding and decoding can be performed before and after image transmission, respectively.
  • the first device and the second device may negotiate a codec format according to the type of the first connection and the transmission content. This embodiment does not impose any limitation on the way of encoding and decoding, and even in some cases, there may be no encoding and decoding process.
  • the second device sends the first image to the first device.
  • After the second device finishes encoding the first image, it sends the first image to the first device.
  • the second device encodes the second image.
  • After the encoding of the first image is completed, the second device encodes the second image while sending the first image to the first device. Step S104 and step S105 occur simultaneously, that is, the second device's sending of the first image to the first device and its encoding of the second image are processed in parallel.
  • the second device sends the second image to the first device.
  • After the second device finishes encoding the second image, it sends the second image to the first device.
  • the first device decodes the first image.
  • After the first device receives the first image from the second device, it decodes the first image; at the same time, the second device sends the second image to the first device. Step S106 and step S107 occur simultaneously, that is, the first device's decoding of the first image and the second device's sending of the second image to the first device are processed in parallel.
  • the first device renders the first image.
  • After decoding the first image, the first device renders the first image.
  • the first device decodes the second image.
  • Step S108 and step S109 occur simultaneously, that is, the rendering of the first image by the first device and the decoding of the second image by the first device are processed in parallel.
  • the first device sends the first image to the display device.
  • After rendering the first image, the first device sends the first image to the display device, which may also be referred to as sending for display. If the display device includes a frame buffer, this step may mean that the first device sends the first image to the frame buffer.
  • the first device renders the second image.
  • After rendering the first image, the first device renders the second image while sending the first image for display. Step S110 and step S111 occur simultaneously, that is, the first device's sending of the first image for display and its rendering of the second image are processed in parallel.
  • the first device sends the second image to the display device.
  • After rendering the second image, the first device sends the second image to the display device. If the display device includes a frame buffer, this step may mean that the first device sends the second image to the frame buffer.
  • the first device acquires image data of the first image and the second image, and displays a third image, where the third image includes the first image and the second image.
  • In some embodiments, the display device includes a frame buffer. After the first image and the second image have been sent for display, the first device waits for the end of the black insertion and acquires the image pixel data of the first image and the second image from the frame buffer; the display screen then lights up and displays the complete third image, that is, the third image includes the first image and the second image.
  • In other embodiments, the display device has no frame buffer. The first device receives and responds to the first image signal of the first image and directly displays the first image; the first device then receives and responds to the second image signal of the second image and displays the second image. At this point both the first image and the second image are displayed, that is, the third image is displayed in full.
  • This embodiment describes dividing the third image into two parts, the first image and the second image; other embodiments that divide it into more parts are not repeated here. It can be understood that no matter how many parts the third image is divided into, the idea of parallel processing is the same: within the same time period, different stages (encoding, transmission, decoding, rendering, sending for display, etc.) of different parts of the image are processed in parallel, thereby reducing the delay time in the image transmission and display process so that the image can be displayed faster.
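  • The parallelism of these steps can be pictured as a chain of single-threaded stages connected by queues, so one stage of a slice overlaps the previous stage of the next slice. The sketch below uses stand-in stubs; the stage names encode, transmit, decode, render, and send_to_display are placeholders, not APIs from this application.

```python
import queue
import threading

def run_stage(work, inbox, outbox):
    # Process slices in arrival order; None is the shutdown signal.
    while (item := inbox.get()) is not None:
        work(item)
        outbox.put(item)
    outbox.put(None)  # forward shutdown to the next stage

def make_stub(name):
    # Stand-in for the real per-stage work on one slice.
    return lambda s: print(f"{name}(slice {s})")

stages = ["encode", "transmit", "decode", "render", "send_to_display"]
queues = [queue.Queue() for _ in range(len(stages) + 1)]
threads = [
    threading.Thread(target=run_stage,
                     args=(make_stub(name), queues[i], queues[i + 1]))
    for i, name in enumerate(stages)
]
for t in threads:
    t.start()
for s in ("Slice1", "Slice2"):  # the first and second images of the third image
    queues[0].put(s)
queues[0].put(None)
for t in threads:
    t.join()
```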
  • a communication system composed of a first device and a second device is taken as an example for description.
  • the first device and the second device cooperate to display images, wherein the second device is an image sending end, the first device is an image receiving end, and the first device includes a display device for displaying images.
  • image rendering is completed by the second device.
  • the embodiment of the present application does not impose any limitation on the type of each device, which may be a mobile phone, a PC, a smart screen, a head-mounted display device, and the like. It can be understood that the method provided in this embodiment is only an example, and does not constitute any limitation to other embodiments of the present application, and more or fewer terminal devices may be included in an actual service scenario.
  • the first device is the aforementioned head-mounted display device 200
  • the second device is the aforementioned computer A
  • the first device and the second device form a communication system 40
  • the communication system 40 can display images using technologies such as VR, AR, and MR, so that users can perceive a virtual environment and enjoy a VR/AR/MR experience.
  • This embodiment does not limit the operating systems carried by the first device and the second device.
  • Fig. 20 is a flow chart of the image transmission and display method provided by Embodiment 2, which specifically includes the following steps:
  • the first device establishes a first connection with the second device.
  • the first connection established between the first device and the second device may include a wired connection, such as an HDMI connection, a DP connection, or a USB connection, or a wireless connection, such as a Bluetooth connection, a Wi-Fi connection, or a hotspot connection; it can also be an Internet connection that enables communication between the first device and the second device under the same account, no account, or different accounts.
  • the embodiment of the present application does not limit the type of the first connection.
  • the first connection may also be a communication connection combining any of the above methods, which is not limited in the embodiment of the present application.
  • the first device may establish a first connection with the second device based on Wi-Fi near-field networking communication, such as a Wi-Fi P2P connection.
  • the data transmitted in this embodiment is data corresponding to multiple frames of images.
  • this embodiment does not impose any limitation on the type of data transmitted between the first device and the second device.
  • images, video, audio, text, etc. can also be transmitted.
  • the second device obtains the third image, and divides the third image into a first image and a second image.
  • the third image acquired by the second device may be a third image generated by the second device based on the sensor data fed back by the first device, or an image generated by the second device itself.
  • this embodiment does not limit the source or the acquisition process of the third image.
  • the second device may divide the third image, for example, divide the entire third image area into small areas that do not overlap each other.
  • the third image is divided into two parts, that is, the first image and the second image, as an example for description.
  • the third image may also be divided into three or more parts; neither the sizes of the divided areas nor the basis for the division is limited in any way, as long as the divided images can be combined into the complete third image.
  • the second device renders the first image.
  • the second device encodes the first image.
  • encoding and decoding can be performed before and after image transmission, respectively.
  • the first device and the second device may negotiate a codec format according to the type of the first connection and the transmission content. This embodiment does not impose any limitation on the way of encoding and decoding, and even in some cases, there may be no encoding and decoding process.
  • the second device renders the second image.
  • Step S204 and step S205 occur simultaneously, that is, the encoding of the first image by the second device and the rendering of the second image by the second device are processed in parallel.
  • the second device sends the first image to the first device.
  • After the second device finishes encoding the first image, it sends the first image to the first device.
  • the second device encodes the second image.
  • After the encoding of the first image is completed, the second device encodes the second image while sending the first image to the first device. Step S206 and step S207 occur simultaneously, that is, the second device's sending of the first image to the first device and its encoding of the second image are processed in parallel.
  • the second device sends the second image to the first device.
  • After the second device finishes encoding the second image, it sends the second image to the first device.
  • the first device decodes the first image.
  • After the first device receives the first image from the second device, it decodes the first image; at the same time, the second device sends the second image to the first device. Step S208 and step S209 occur simultaneously, that is, the first device's decoding of the first image and the second device's sending of the second image to the first device are processed in parallel.
  • the first device sends the first image to the display device.
  • After decoding the first image, the first device sends the first image to the display device, which may also be referred to as sending for display. If the display device includes a frame buffer, this step may mean that the first device sends the first image to the frame buffer.
  • the first device decodes the second image.
  • After the first device finishes decoding the first image, it starts to decode the second image while sending the first image for display. Step S210 and step S211 occur simultaneously, that is, the first device's sending of the first image for display and its decoding of the second image are processed in parallel.
  • the first device sends the second image to the display device.
  • After decoding the second image, the first device sends the second image to the display device. If the display device includes a frame buffer, this step may mean that the first device sends the second image to the frame buffer.
  • the first device acquires image data of the first image and the second image, and displays a third image, where the third image includes the first image and the second image.
  • In some embodiments, the display device includes a frame buffer. After the first image and the second image have been sent for display, the first device waits for the end of the black insertion and acquires the image pixel data of the first image and the second image from the frame buffer; the display screen then lights up and displays the complete third image, that is, the third image includes the first image and the second image.
  • In other embodiments, the display device has no frame buffer. The first device receives and responds to the first image signal of the first image and directly displays the first image; the first device then receives and responds to the second image signal of the second image and displays the second image. At this point both the first image and the second image are displayed, that is, the third image is displayed in full.
  • This embodiment describes dividing the third image into two parts, the first image and the second image; other embodiments that divide it into more parts are not repeated here. It can be understood that no matter how many parts the third image is divided into, the idea of parallel processing is the same: within the same time period, different stages (encoding, transmission, decoding, rendering, sending for display, etc.) of different parts of the image are processed in parallel, thereby reducing the delay time in the image transmission and display process so that the image can be displayed faster.
  • the first device includes display means for displaying images. It can be understood that the method provided in this embodiment is only an example, and does not constitute any limitation to other embodiments of the present application.
  • the first device is the aforementioned all-in-one head-mounted display device 200, which can display images using technologies such as VR, AR, and MR, so that users can perceive a virtual environment and enjoy a VR/AR/MR experience.
  • This embodiment does not limit the operating system carried by the first device.
  • Fig. 21 is a flowchart of the image transmission and display method provided by the third embodiment, which specifically includes the following steps:
  • the first device acquires a third image, and divides the third image into a first image and a second image.
  • the third image acquired by the first device may be generated by the first device based on the sensor data fed back by each sensor, or received from a cloud server.
  • this embodiment does not limit the source or the acquisition process of the third image.
  • the GPU of the first device may divide the third image, that is, divide the entire third image area into small areas that do not overlap each other.
  • this embodiment takes dividing the third image into two parts, that is, the first image and the second image, as an example for description.
  • the third image may also be divided into three or more parts; neither the sizes of the divided areas nor the basis for the division is limited in any way, as long as the divided images can be combined into the complete third image.
  • the first device renders a first image.
  • the first device sends the first image to the display device.
  • After rendering the first image, the first device sends the first image to the display device, which may also be referred to as sending for display. If the display device includes a frame buffer, this step may mean that the first device sends the first image to the frame buffer.
  • the first device renders the second image.
  • After rendering the first image, the first device renders the second image while sending the first image for display. Step S303 and step S304 occur simultaneously, that is, the first device's sending of the first image for display and its rendering of the second image are processed in parallel.
  • the first device sends the second image to the display device.
  • After rendering the second image, the first device sends the second image for display. If the display device includes a frame buffer, this step may mean that the first device sends the second image to the frame buffer.
  • the first device acquires image data of the first image and the second image.
  • the first device displays a third image, where the third image includes the first image and the second image.
  • In some embodiments, the display device includes a frame buffer. After the first image and the second image have been sent for display, the first device waits for the end of the black insertion and acquires the image pixel data of the first image and the second image from the frame buffer; the display screen then lights up and displays the complete third image, that is, the third image includes the first image and the second image.
  • In other embodiments, the display device has no frame buffer. The first device receives and responds to the first image signal of the first image and directly displays the first image; the first device then receives and responds to the second image signal of the second image and displays the second image. At this point both the first image and the second image are displayed, that is, the third image is displayed in full.
  • This embodiment describes dividing the third image into two parts, the first image and the second image; other embodiments that divide it into more parts are not repeated here. It can be understood that no matter how many parts the third image is divided into, the idea of parallel processing is the same: within the same time period, different stages (rendering, sending for display, etc.) of different parts of the image are processed in parallel, thereby reducing the delay time in the image transmission and display process so that the image can be displayed faster.
  • Embodiment 4 provides an image transmission and display method, the method is used for displaying by a first device, and the first device includes a display device.
  • the method may include: between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits the first image signal to the display device. Between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits a second image signal to the display device, and the first image signal is not synchronized with the second image signal. Its schematic process is shown in Figure 22.
  • the time at which the first device transmits the first image signal to the display device also lies between the first display signal and the second display signal.
  • the time at which the first device transmits the second image signal to the display device also lies between the first display signal and the second display signal.
  • the time interval between the first vertical synchronization signal and the second vertical synchronization signal is T1, the time interval between the first display signal and the second display signal is T2, and T1 and T2 are equal.
  • the display device displays a black-inserted image frame; the duration of the black-inserted image frame is T3, and T3 is shorter than T1.
  • In response to the first vertical synchronization signal or the second vertical synchronization signal, the display device starts to display the black-inserted image frame. In response to the first display signal or the second display signal, the display device finishes displaying the black-inserted image frame and displays a third image, where the third image includes the first image and the second image.
  • the first image signal is an image signal of the rendered first image, and the second image signal is an image signal of the rendered second image. That is, before the first image and the second image are sent for display slice by slice, they are each rendered slice by slice. Its schematic process is shown in Figure 23.
  • When the first device transmits the first image signal to the display device, the first device renders the second image. That is, the step of sending the first image for display is performed in parallel with the step of rendering the second image.
  • the first time is the time taken by the first device from starting to render the first image to finishing transmitting the second image to the display device, and the second time is the time taken by the first device from starting to render the third image as a whole to finishing transmitting the third image to the display device. The third image includes the first image and the second image, and the first time is shorter than the second time. Its schematic process is shown in Figure 23.
  • Before the first device transmits the first image signal to the display device, the first device renders the first image.
  • the first device further includes an image rendering module for rendering the first image and the second image; the display device sends a feedback signal to the image rendering module, and the feedback signal indicates the vertical synchronization information of the display device.
  • Before the first device renders the first image, the first device receives the first image transmitted by the second device. Before the first device renders the second image, the first device receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image slice by slice, it receives the first image and the second image transmitted slice by slice from the second device. Its schematic process is shown in Figure 24.
  • When the first device renders the first image, the first device receives the second image transmitted by the second device. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
  • the third time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device, and the fourth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device. The third time is less than the fourth time. Its schematic process is shown in Figure 24.
  • Before the first device transmits the first image signal to the display device, the first device receives the first image transmitted by the second device. Before the first device transmits the second image signal to the display device, the first device receives the second image transmitted by the second device. That is, before the first device sends the first image and the second image for display slice by slice, it receives the first image and the second image transmitted slice by slice from the second device. Its schematic process is shown in Figure 25.
  • When the first device transmits the first image signal to the display device, the first device receives the second image transmitted by the second device. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image.
  • the fifth time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device, and the sixth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device. The fifth time is less than the sixth time. Its schematic process is shown in Figure 25.
  • both the first image and the second image are rendered by the second device. That is, before the first image and the second image are transmitted slice by slice from the second device to the first device, the second device renders the first image and the second image slice by slice. Its schematic process is shown in Figure 26.
  • When the second device renders the second image, the first device receives the first image transmitted by the second device. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
  • the seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device, and the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device. The seventh time is shorter than the eighth time. Its schematic process is shown in Figure 26.
  • In response to the second display signal, the first device displays a third image, where the third image includes the first image and the second image.
  • In response to the first image signal, the first device displays the first image; in response to the second image signal, the first device displays the second image.
  • the display device further includes a frame buffer for storing pixel data of the first image and the second image.
  • the first device reads the pixel data of the first image and the second image in the frame buffer, and the first device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
  • Embodiment 5 provides an image transmission and display method, the method is used for displaying by a first device, and the first device includes a display device.
  • the method may include: the first device transmits the first image to the display device, the first device transmits the second image to the display device, and the display device displays a third image, wherein the third image includes the first image and the second image. Its schematic process is shown in Figure 22.
  • In this way, a whole image can be divided into multiple parts that are sent for display in parallel, so that the display path is optimized, the waiting time in the image transmission and display process is reduced, the delay of image transmission and display is further reduced, and images can be displayed faster.
  • Before the first device transmits the first image to the display device, the first device renders the first image. Before the first device transmits the second image to the display device, the first device renders the second image. That is, before the first image and the second image are sent for display slice by slice, they are each rendered slice by slice. Its schematic process is shown in Figure 23.
  • When the first device transmits the first image signal to the display device, the first device renders the second image. That is, the step of sending the first image for display is performed in parallel with the step of rendering the second image.
  • the first time is the time taken by the first device from starting to render the first image to finishing transmitting the second image to the display device, and the second time is the time taken by the first device from starting to render the third image as a whole to finishing transmitting the third image to the display device. The third image includes the first image and the second image, and the first time is shorter than the second time. Its schematic process is shown in Figure 23.
  • Before the first device renders the first image, the first device receives the first image transmitted by the second device. Before the first device renders the second image, the first device receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image slice by slice, it receives the first image and the second image transmitted slice by slice from the second device. Its schematic process is shown in Figure 24.
  • When the first device renders the first image, the first device receives the second image transmitted by the second device. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
  • the third time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device, and the fourth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device. The third time is less than the fourth time. Its schematic process is shown in Figure 24.
  • Before the first device transmits the first image to the display device, the first device receives the first image transmitted by the second device. Before the first device transmits the second image to the display device, the first device receives the second image transmitted by the second device. That is, before the first device sends the first image and the second image for display slice by slice, it receives the first image and the second image transmitted slice by slice from the second device. Its schematic process is shown in Figure 25.
  • When the first device transmits the first image to the display device, the first device receives the second image transmitted by the second device. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image.
  • the fifth time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device, and the sixth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device. The fifth time is less than the sixth time. Its schematic process is shown in Figure 25.
  • the first image and the second image are rendered by the second device. That is, before the first image and the second image are transmitted slice by slice from the second device to the first device, the second device renders the first image and the second image slice by slice. Its schematic process is shown in Figure 26.
  • When the second device renders the second image, the first device receives the first image transmitted by the second device. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
  • the seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device, and the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device. The seventh time is shorter than the eighth time. Its schematic process is shown in Figure 26.
  • the display device further includes a frame buffer, and the frame buffer stores pixel data of the first image and the second image.
  • the first device reads the pixel data of the first image and the second image in the frame buffer, and then the display device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the method further includes: in response to the first image signal, the first device displays the first image. In response to the second image signal, the first device displays the second image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
  • Embodiment 6 provides an image transmission and display method, which may include: establishing a connection between a first device and a second device.
  • the second device transmits the first image to the first device via the connection, and the second device transmits the second image to the first device via the connection.
  • the first device includes a display device, the first device transmits the first image to the display device, and the first device transmits the second image to the display device.
  • the display device displays the third image, wherein the third image includes the first image and the second image. Its schematic process is shown in Figure 25.
  • the second device divides a whole image into multiple parts and transmits them to the first device in parallel, and the first device then sends these multiple parts for display in parallel, so that the display path is optimized. This reduces the waiting time in the process of image transmission and display, further reduces the delay of image transmission and display, and allows the image to be displayed faster.
  • when the first device transmits the first image to the display device, the second device transmits the second image to the first device through the connection. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image.
  • the fifth time is the time taken from when the second device starts transmitting the first image to the first device to when the first device finishes transmitting the second image to the display device. The sixth time is the time taken from when the second device starts transmitting the third image to the first device to when the first device finishes transmitting the third image to the display device. The fifth time is less than the sixth time. Its schematic process is shown in Figure 25.
  • before the second device transmits the first image to the first device through the connection, the second device renders the first image. Before the second device transmits the second image to the first device through the connection, the second device renders the second image. That is, before the first device receives the first image and the second image transmitted in slices from the second device, the second device performs slice rendering on the first image and the second image respectively. Its schematic process is shown in Figure 26.
  • when the first device receives the first image transmitted by the second device, the second device renders the second image. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image.
  • the seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device. The eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device. The seventh time is shorter than the eighth time. Its schematic process is shown in Figure 26. A toy code sketch of this pipelined path is given at the end of this list.
  • the method further includes: after the second device transmits the first image to the first device through the connection and before the first device transmits the first image to the display device, the first device renders the first image. After the second device transmits the second image to the first device through the connection and before the first device transmits the second image to the display device, the first device renders the second image. Its schematic process is shown in Figure 24.
  • when the second device transmits the second image to the first device through the connection, the first device renders the first image. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image.
  • the third time is the time taken from when the second device starts transmitting the first image to the first device to when the first device finishes transmitting the second image to the display device. The fourth time is the time taken from when the second device starts transmitting the third image to the first device to when the first device finishes transmitting the third image to the display device. The third time is less than the fourth time. Its schematic process is shown in Figure 24.
  • the display device further includes a frame buffer, and the frame buffer stores pixel data of the first image and the second image.
  • the first device reads the pixel data of the first image and the second image in the frame buffer, and the first device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
  • the method further includes: in response to the first image signal, the first device displays the first image. In response to the second image signal, the first device displays the second image.
  • the first device reads pixel data of the first image in the frame buffer, and the first device displays the first image.
  • the first device reads the pixel data of the second image in the frame buffer, and the first device displays the second image. That is, after the first device obtains the pixel data of the first image, it displays the first image, and after obtaining the pixel data of the second image, it displays the second image.
  • the first image and the second image form a complete frame of the third image.
  • the embodiment of the present application also provides an electronic device, which may include: a communication device, a display device, a memory, a processor coupled to the memory, multiple application programs, and one or more programs.
  • Computer-executable instructions are stored in the memory, and the display device is used to display images.
  • when the processor executes the instructions, the electronic device can realize any function of the first device in Embodiment 4 or Embodiment 5.
  • the embodiment of the present application also provides a computer storage medium, where a computer program is stored in the storage medium, and the computer program includes executable instructions.
  • when the executable instructions are executed by a processor, the processor performs the operations corresponding to the method provided in Embodiment 4 or Embodiment 5.
  • An embodiment of the present application further provides a computer program product, which, when the computer program product is run on an electronic device, causes the electronic device to execute any possible implementation manner in Embodiment 4 or Embodiment 5.
  • the embodiment of the present application also provides a chip system, which can be applied to electronic devices. The chip includes one or more processors, and the processors are used to call computer instructions so that the electronic device implements any possible implementation of Embodiment 4 or Embodiment 5.
  • the embodiment of the present application also provides a communication system. The communication system includes a first device and a second device, and the first device can realize some of the functions of the first device in Embodiment 4 or Embodiment 5.
  • the delay in the process of image transmission and display can be reduced, and the image can be displayed faster. Furthermore, in AR/VR or MR scenarios, the MTP latency can be reduced, which effectively alleviates adverse motion-sickness reactions such as dizziness and vomiting and improves the user experience. It can be understood that the method provided by this application can be applied to more scenarios, such as a PC displaying images on a screen or a vehicle-mounted device displaying images, where the parallel processing of sliced transmission, rendering, and sending for display speeds up the end-to-end display process, reduces delay, and improves the user's viewing experience.
  • the term “when” may be interpreted to mean “if”, “after”, “in response to determining”, or “in response to detecting”.
  • the phrases “if it is determined” or “if (a stated condition or event) is detected” may be interpreted to mean “upon determining”, “in response to determining”, “upon detecting (a stated condition or event)”, or “in response to detecting (a stated condition or event)”.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid-state drive), etc.
  • the processes may be implemented by a computer program instructing related hardware.
  • the program may be stored in a computer-readable storage medium.
  • when the program is executed, the processes of the foregoing method embodiments may be performed.
  • the aforementioned storage medium includes various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
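As a hedged illustration of the pipelined path summarized in the bullets above (Embodiment 6 in particular), the following toy sketch shows the second device rendering slice 2 while slice 1 is being transmitted, and the first device sending slice 1 on to its display device while slice 2 is still in flight. All stage durations, names, and the queue-based hand-offs are assumptions of this example, not details from the application.

```python
import threading
import queue
import time

RENDER_S, TX_S, DISPLAY_S = 0.003, 0.003, 0.003  # assumed per-slice costs
rendered = queue.Queue()
received = queue.Queue()

def second_device_render(num_slices=2):
    # The second device renders the image slice by slice.
    for i in range(1, num_slices + 1):
        time.sleep(RENDER_S)
        print(f"second device: rendered slice {i}")
        rendered.put(i)
    rendered.put(None)

def link_transmit():
    # Transmitting slice 1 to the first device overlaps rendering slice 2.
    while (i := rendered.get()) is not None:
        time.sleep(TX_S)
        print(f"link: transmitted slice {i} to first device")
        received.put(i)
    received.put(None)

def first_device_send_for_display():
    # The first device forwards each slice to its display device as it lands.
    while (i := received.get()) is not None:
        time.sleep(DISPLAY_S)
        print(f"first device: sent slice {i} to display device")

threads = [threading.Thread(target=f) for f in
           (second_device_render, link_transmit, first_device_send_for_display)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the three stages run concurrently, the end-to-end time approaches the cost of the slowest stage plus one slice's worth of each other stage, rather than the sum of all three stage costs.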

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Disclosed in the present application are an image transmission and display method and a related device and system. An entire image can be segmented into a plurality of parts for parallel processing of transmission, rendering, and displaying, such that a display pathway is optimized, the waiting duration in the process of image transmission and display is shortened, and the delay duration of image transmission and display is further shortened. Furthermore, in the scenario of AR/VR or MR, MTP delay can be reduced, the adverse reactions of motion sickness such as dizziness and vomiting of a user are effectively relieved, and the use experience is improved.

Description

Image transmission and display method, related device and system
This application claims priority to the Chinese patent application with application number 202110602850.9, entitled "Image transmission and display method, related device and system", filed with the China Patent Office on May 31, 2021, the entire content of which is incorporated by reference in this application.
Technical field
The present application relates to the field of terminal technologies, and in particular, to an image transmission and display method, a related device, and a system.
Background
With the development of terminal display technologies, augmented reality (AR), virtual reality (VR), and mixed reality (MR) have more and more application scenarios. An AR device can superimpose and display virtual images for a user while the user views a real-world scene, and the user can also interact with the virtual images to achieve an augmented-reality effect. A VR device can simulate a three-dimensional (3D) virtual world scene, and can also provide visual, auditory, tactile, or other sensory simulation experiences, making the user feel immersed in the scene. Moreover, the user can also interact with the simulated virtual world scene. MR combines AR and VR and can provide the user with a view that merges the real and virtual worlds.
A head-mounted display device, or head-mounted display for short, is a display device worn on the user's head that can provide the user with a new visual environment. By emitting optical signals, a head-mounted display device can present AR, VR, MR, and other display effects to the user. An important factor affecting the user experience of AR/VR or MR head-mounted display devices is the motion-to-photon (MTP) latency. MTP latency refers to the total delay in the cycle from the user's head movement to the display of the corresponding new image on the device screen.
The greater the MTP latency, the more it affects the user experience; an excessive MTP latency easily causes motion sickness. Motion sickness is a physiological reaction of the human body: when a user experiences AR/VR or MR, because the picture is dynamic, the brain assumes the change in the picture is due to some movement of the body, but the vestibular organs in the ear, which control body balance and coordination, do not feed back any body-movement signal to the brain. The resulting mismatch between the visual information perceived by the brain and the sensed motion causes the user to react with dizziness and vomiting, which is called motion sickness. It is currently recognized in the industry that an MTP latency below 20 milliseconds (ms) can greatly reduce the occurrence of motion sickness.
By reducing the MTP latency as much as possible, motion sickness can be avoided and the user experience improved.
Summary
This application provides an image transmission and display method and a related device, which can optimize the display path by dividing a whole image into multiple parts for parallel processing of transmission, rendering, and sending for display, thereby reducing the waiting time in the process of image transmission and display and further reducing the delay of image transmission and display.
The above and other objects are achieved by the features in the independent claims. Further implementations are presented in the dependent claims, the description, and the drawings.
In a first aspect, this application provides an image transmission and display method. The method is used by a first device for display, and the first device includes a display device. The method may include: between a first vertical synchronization signal and a second vertical synchronization signal, the first device transmits a first image signal to the display device; between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits a second image signal to the display device, where the first image signal is not synchronized with the second image signal.
By implementing the method of the first aspect, a whole image is divided into multiple parts and these parts are sent for display in parallel, so that the display path is optimized. This reduces the waiting time in the process of image transmission and display, further reduces the delay of image transmission and display, and allows the image to be displayed faster.
With reference to the first aspect, in a possible implementation, the first device transmits the first image signal to the display device also between a first display signal and a second display signal, and the first device transmits the second image signal to the display device also between the first display signal and the second display signal.
With reference to the first aspect, in a possible implementation, the time interval between the first vertical synchronization signal and the second vertical synchronization signal is T1, the time interval between the first display signal and the second display signal is T2, and T1 is equal to T2.
With reference to the first aspect, in a possible implementation, between the first vertical synchronization signal and the second vertical synchronization signal, the display device displays a black-inserted image frame, and the period of the black-inserted image frame is T3, where T3 is shorter than T1.
With reference to the first aspect, in a possible implementation, in response to the first vertical synchronization signal and the second vertical synchronization signal, the display device starts to display the black-inserted image frame. In response to the first display signal or the second display signal, the display device stops displaying the black-inserted image frame and displays a third image, where the third image includes the first image and the second image.
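To make the black-insertion timing above concrete, here is a minimal illustrative sketch, not from the application itself: the 90 Hz refresh rate, the value of T3, the assumption that the display signal arrives T3 after the vertical synchronization signal, and all names are choices made for this example.

```python
import time

REFRESH_HZ = 90            # assumed panel refresh rate
T1 = 1.0 / REFRESH_HZ      # interval between vertical synchronization signals (~11.1 ms)
T3 = 0.006                 # assumed black-frame period; the scheme requires T3 < T1

class Display:
    def show_black(self):
        print("display: black-inserted image frame")

    def show(self, image):
        print(f"display: {image}")

def refresh_cycle(display, third_image):
    # In response to a vertical synchronization signal, the display
    # starts showing the black-inserted frame ...
    display.show_black()
    time.sleep(T3)  # the display signal is assumed to arrive T3 later
    # ... and in response to the display signal it stops showing the
    # black frame and shows the third image (first image + second image).
    display.show(third_image)
    time.sleep(T1 - T3)  # remainder of the refresh interval

refresh_cycle(Display(), "third image = first slice + second slice")
```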
With reference to the first aspect, in a possible implementation, the first image signal is an image signal of the rendered first image, and the second image signal is an image signal of the rendered second image. That is, before the first image and the second image are sent for display in slices, the first image and the second image are each rendered in slices.
With reference to the first aspect, in a possible implementation, when the first device transmits the first image signal to the display device, the first device renders the second image. That is, the step of sending the first image for display is performed in parallel with the step of rendering the second image. The first time is the time taken by the first device from starting to render the first image to finishing transmitting the second image to the display device; the second time is the time taken by the first device to render the third image and transmit the third image to the display device serially, where the third image includes the first image and the second image; the first time is shorter than the second time.
With reference to the first aspect, in a possible implementation, before the first device transmits the first image signal to the display device, the first device renders the first image.
With reference to the first aspect, in a possible implementation, the first device further includes an image rendering module. The image rendering module is configured to render the first image and the second image, and the display device sends a feedback signal to the image rendering module, where the feedback signal indicates the vertical synchronization information of the display device.
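As an illustrative sketch of the feedback path just described, a rendering module can align its per-slice work with the vertical synchronization timing reported back by the display device. The queue-based channel, thread structure, refresh rate, and all names are assumptions for this example, not the application's actual interface.

```python
import threading
import time
import queue

vsync_feedback = queue.Queue()  # assumed feedback channel: display -> renderer

def display_device(refresh_hz=90, frames=3):
    # The display device sends a feedback signal carrying its vertical
    # synchronization information to the image rendering module.
    for n in range(frames):
        vsync_feedback.put(("vsync", n, time.monotonic()))
        time.sleep(1.0 / refresh_hz)
    vsync_feedback.put(None)  # end of the demo

def image_rendering_module():
    # The rendering module uses the feedback to start rendering the
    # slices of each frame in step with the display's refresh timing.
    while (msg := vsync_feedback.get()) is not None:
        _, n, t = msg
        print(f"render first and second image slices for frame {n} "
              f"(vsync reported at t={t:.4f}s)")

renderer = threading.Thread(target=image_rendering_module)
renderer.start()
display_device()
renderer.join()
```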
With reference to the first aspect, in a possible implementation, before the first device renders the first image, the first device receives the first image transmitted by the second device. Before the first device renders the second image, the first device receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image in slices, it receives the first image and the second image transmitted in slices from the second device.
With reference to the first aspect, in a possible implementation, when the first device renders the first image, the first device receives the second image transmitted by the second device. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image. The third time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device; the fourth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device; the third time is shorter than the fourth time.
With reference to the first aspect, in a possible implementation, before the first device transmits the first image signal to the display device, the first device receives the first image transmitted by the second device. Before the first device transmits the second image signal to the display device, the first device receives the second image transmitted by the second device. That is, before the first device sends the first image and the second image for display in slices, it receives the first image and the second image transmitted in slices from the second device.
With reference to the first aspect, in a possible implementation, when the first device transmits the first image signal to the display device, the first device receives the second image transmitted by the second device. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image. The fifth time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device; the sixth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device; the fifth time is shorter than the sixth time.
With reference to the first aspect, in a possible implementation, both the first image and the second image are rendered by the second device. That is, before the first device receives the first image and the second image transmitted in slices from the second device, the second device performs slice rendering on the first image and the second image respectively.
With reference to the first aspect, in a possible implementation, when the second image is being rendered by the second device, the first device receives the first image transmitted by the second device. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image. The seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device; the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device; the seventh time is shorter than the eighth time.
With reference to the first aspect, in a possible implementation, in response to the second display signal, the first device displays a third image, where the third image includes the first image and the second image.
With reference to the first aspect, in a possible implementation, in response to the first image signal, the first device displays the first image, and in response to the second image signal, the first device displays the second image.
With reference to the first aspect, in a possible implementation, the display device further includes a frame buffer, and the frame buffer is used to store pixel data of the first image and the second image.
With reference to the first aspect, in a possible implementation, the first device reads the pixel data of the first image and the second image in the frame buffer, and the first device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
With reference to the first aspect, in a possible implementation, the first device reads the pixel data of the first image in the frame buffer and displays the first image, and the first device reads the pixel data of the second image in the frame buffer and displays the second image. That is, the first device displays the first image as soon as it obtains the pixel data of the first image, and then displays the second image after obtaining the pixel data of the second image. The first image and the second image form a complete frame of the third image.
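The per-slice display variant can be sketched as follows. This is a toy model: the frame buffer contents, the slice names, and the "scan out" printout are assumptions for illustration. Each slice is read out of the frame buffer and shown as soon as its pixel data is available, and the two slices together make up the third image.

```python
# Toy frame buffer holding the pixel data of the two slices; each slice
# is modeled as a list of pixel rows.
frame_buffer = {
    "first image":  ["row 0", "row 1"],   # upper half of the third image
    "second image": ["row 2", "row 3"],   # lower half of the third image
}

def display_slice(name):
    # Read the slice's pixel data from the frame buffer and show it
    # immediately, without waiting for the rest of the frame.
    for row in frame_buffer[name]:
        print(f"scan out {name}: {row}")

display_slice("first image")    # shown as soon as its pixels are ready ...
display_slice("second image")   # ... then the second slice completes the frame
```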
In a second aspect, this application provides an image transmission and display method. The method is used by a first device for display, and the first device includes a display device. The method may include: the first device transmits a first image to the display device, the first device transmits a second image to the display device, and the display device displays a third image, where the third image includes the first image and the second image.
By implementing the method of the second aspect, a whole image is divided into multiple parts and these parts are sent for display in parallel, so that the display path is optimized. This reduces the waiting time in the process of image transmission and display, further reduces the delay of image transmission and display, and allows the image to be displayed faster.
With reference to the second aspect, in a possible implementation, before the first device transmits the first image to the display device, the first device renders the first image. Before the first device transmits the second image to the display device, the first device renders the second image. That is, before the first image and the second image are sent for display in slices, the first image and the second image are each rendered in slices.
With reference to the second aspect, in a possible implementation, when the first device transmits the first image signal to the display device, the first device renders the second image. That is, the step of sending the first image for display is performed in parallel with the step of rendering the second image. The first time is the time taken by the first device from starting to render the first image to finishing transmitting the second image to the display device; the second time is the time taken by the first device to render the third image and transmit the third image to the display device serially, where the third image includes the first image and the second image; the first time is shorter than the second time.
With reference to the second aspect, in a possible implementation, before the first device renders the first image, the first device receives the first image transmitted by the second device. Before the first device renders the second image, the first device receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image in slices, it receives the first image and the second image transmitted in slices from the second device.
With reference to the second aspect, in a possible implementation, when the first device renders the first image, the first device receives the second image transmitted by the second device. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image. The third time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device; the fourth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device; the third time is shorter than the fourth time.
With reference to the second aspect, in a possible implementation, before the first device transmits the first image to the display device, the first device receives the first image transmitted by the second device. Before the first device transmits the second image to the display device, the first device receives the second image transmitted by the second device. That is, before the first device sends the first image and the second image for display in slices, it receives the first image and the second image transmitted in slices from the second device.
With reference to the second aspect, in a possible implementation, when the first device transmits the first image to the display device, the first device receives the second image transmitted by the second device. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image. The fifth time is the time taken by the first device from starting to receive the first image transmitted by the second device to finishing transmitting the second image to the display device; the sixth time is the time taken by the first device from starting to receive the third image transmitted by the second device to finishing transmitting the third image to the display device; the fifth time is shorter than the sixth time.
With reference to the second aspect, in a possible implementation, the first image and the second image are rendered by the second device. That is, before the first device receives the first image and the second image transmitted in slices from the second device, the second device performs slice rendering on the first image and the second image respectively.
With reference to the second aspect, in a possible implementation, when the second image is being rendered by the second device, the first device receives the first image transmitted by the second device. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image. The seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device; the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device; the seventh time is shorter than the eighth time.
With reference to the second aspect, in a possible implementation, the display device further includes a frame buffer, and the frame buffer stores pixel data of the first image and the second image.
With reference to the second aspect, in a possible implementation, the first device reads the pixel data of the first image and the second image in the frame buffer, and then the display device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
With reference to the second aspect, in a possible implementation, the method further includes: in response to a first image signal, the first device displays the first image; in response to a second image signal, the first device displays the second image.
With reference to the second aspect, in a possible implementation, the first device reads the pixel data of the first image in the frame buffer and displays the first image, and the first device reads the pixel data of the second image in the frame buffer and displays the second image. That is, the first device displays the first image as soon as it obtains the pixel data of the first image, and then displays the second image after obtaining the pixel data of the second image. The first image and the second image form a complete frame of the third image.
In a third aspect, this application provides an image transmission and display method. The method may include: a first device establishes a connection with a second device; the second device transmits a first image to the first device through the connection, and the second device transmits a second image to the first device through the connection; the first device includes a display device, the first device transmits the first image to the display device, and the first device transmits the second image to the display device; the display device displays a third image, where the third image includes the first image and the second image.
By implementing the method of the third aspect, the second device divides a whole image into multiple parts and transmits them to the first device in parallel, and the first device then sends these multiple parts for display in parallel, so that the display path is optimized. This reduces the waiting time in the process of image transmission and display, further reduces the delay of image transmission and display, and allows the image to be displayed faster.
With reference to the third aspect, in a possible implementation, when the first device transmits the first image to the display device, the second device transmits the second image to the first device through the connection. That is, the step of sending the first image for display is performed in parallel with the step of transmitting the second image. The fifth time is the time taken from when the second device starts transmitting the first image to the first device to when the first device finishes transmitting the second image to the display device; the sixth time is the time taken from when the second device starts transmitting the third image to the first device to when the first device finishes transmitting the third image to the display device; the fifth time is shorter than the sixth time.
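The claim that the fifth time is shorter than the sixth time can be checked with simple worked numbers. The 8 ms stage costs and the two-slice split below are assumptions for illustration, not figures from the application.

```python
def pipelined_total(n_slices, transmit_total, display_total):
    # Event-driven tally: each slice must finish arriving at the first
    # device before it can be sent on to the display device, and the
    # display link handles one slice at a time.
    per_tx = transmit_total / n_slices
    per_send = display_total / n_slices
    tx_done = send_done = 0.0
    for _ in range(n_slices):
        tx_done += per_tx
        send_done = max(send_done, tx_done) + per_send
    return send_done

TRANSMIT_MS = 8.0   # assumed: second device -> first device, whole frame
DISPLAY_MS = 8.0    # assumed: first device -> display device, whole frame

sixth_time = TRANSMIT_MS + DISPLAY_MS                      # serial path: 16 ms
fifth_time = pipelined_total(2, TRANSMIT_MS, DISPLAY_MS)   # two slices: 12 ms

print(f"fifth time = {fifth_time} ms, sixth time = {sixth_time} ms")
assert fifth_time < sixth_time
```

With two slices, only the first slice's 4 ms of transmission is paid before the display link starts working, so 4 ms of transmission overlaps with sending for display and the total drops from 16 ms to 12 ms.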
With reference to the third aspect, in a possible implementation, before the second device transmits the first image to the first device through the connection, the second device renders the first image. Before the second device transmits the second image to the first device through the connection, the second device renders the second image. That is, before the first device receives the first image and the second image transmitted in slices from the second device, the second device performs slice rendering on the first image and the second image respectively.
With reference to the third aspect, in a possible implementation, when the first device receives the first image transmitted by the second device, the second device renders the second image. That is, the step of transmitting the first image is performed in parallel with the step of rendering the second image. The seventh time is the time taken from when the second device starts rendering the first image to when the first device finishes transmitting the second image to the display device; the eighth time is the time taken from when the second device starts rendering the third image to when the first device finishes transmitting the third image to the display device; the seventh time is shorter than the eighth time.
With reference to the third aspect, in a possible implementation, the method further includes: after the second device transmits the first image to the first device through the connection and before the first device transmits the first image to the display device, the first device renders the first image; after the second device transmits the second image to the first device through the connection and before the first device transmits the second image to the display device, the first device renders the second image.
With reference to the third aspect, in a possible implementation, when the second device transmits the second image to the first device through the connection, the first device renders the first image. That is, the step of rendering the first image is performed in parallel with the step of transmitting the second image. The third time is the time taken from when the second device starts transmitting the first image to the first device to when the first device finishes transmitting the second image to the display device; the fourth time is the time taken from when the second device starts transmitting the third image to the first device to when the first device finishes transmitting the third image to the display device; the third time is shorter than the fourth time.
With reference to the third aspect, in a possible implementation, the display device further includes a frame buffer, and the frame buffer stores pixel data of the first image and the second image.
With reference to the third aspect, in a possible implementation, the first device reads the pixel data of the first image and the second image in the frame buffer, and the first device displays the third image. That is, after obtaining the pixel data of the first image and the second image, the display device refreshes and displays a whole frame of the third image.
With reference to the third aspect, in a possible implementation, the method further includes: in response to a first image signal, the first device displays the first image; in response to a second image signal, the first device displays the second image.
With reference to the third aspect, in a possible implementation, the first device reads the pixel data of the first image in the frame buffer and displays the first image, and the first device reads the pixel data of the second image in the frame buffer and displays the second image. That is, the first device displays the first image as soon as it obtains the pixel data of the first image, and then displays the second image after obtaining the pixel data of the second image. The first image and the second image form a complete frame of the third image.
In a fourth aspect, an embodiment of this application provides an electronic device. The electronic device may include a communication apparatus, a display device, a memory, a processor coupled to the memory, multiple application programs, and one or more programs. Computer-executable instructions are stored in the memory, and the display device is used to display images. When the processor executes the instructions, the electronic device can realize any function of the first device in the first aspect or the second aspect.
In a fifth aspect, an embodiment of this application provides a computer storage medium. A computer program is stored in the storage medium, and the computer program includes executable instructions. When executed by a processor, the executable instructions cause the processor to perform the operations corresponding to the method provided in the first aspect or the second aspect.
In a sixth aspect, an embodiment of this application provides a computer program product. When the computer program product runs on an electronic device, the electronic device executes any possible implementation of the first aspect or the second aspect.
In a seventh aspect, an embodiment of this application provides a chip system. The chip system can be applied to an electronic device, and the chip includes one or more processors, where the processors are used to call computer instructions so that the electronic device implements any possible implementation of the first aspect or the second aspect.
In an eighth aspect, an embodiment of this application provides a communication system. The communication system includes a first device and a second device, and the first device can realize some of the functions of the first device in the first aspect or the second aspect.
Implementing the above aspects provided by the embodiments of this application can reduce the delay in the process of image transmission and display, so that images can be displayed faster. Furthermore, in AR/VR or MR scenarios, the MTP latency can be reduced, which effectively alleviates adverse motion-sickness reactions such as dizziness and vomiting and improves the user experience. It can be understood that the above method provided by this application is applicable to more scenarios, such as an electronic device projecting images to a screen or a vehicle-mounted device displaying images, where the parallel processing of sliced transmission, rendering, and sending for display speeds up the end-to-end display process, reduces delay, and improves the user's viewing experience.
Description of drawings
FIG. 1 is a schematic structural diagram of a head-mounted display device;
FIG. 2 is a schematic diagram of a usage scenario of a head-mounted display device;
FIG. 3 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application;
FIG. 4 is a schematic diagram of the software structure of an electronic device provided by an embodiment of this application;
FIG. 5 is a schematic diagram of a communication system provided by an embodiment of this application;
FIG. 6 is a schematic diagram of process modules provided by an embodiment of this application;
FIG. 7 is a schematic diagram of an image display process;
FIG. 8 is a schematic diagram of a module time-consumption analysis provided by an embodiment of this application;
FIG. 9 is a schematic diagram of a module time-consumption analysis provided by an embodiment of this application;
FIG. 10 is a schematic diagram of an image slice transmission process provided by an embodiment of this application;
FIG. 11 is a schematic diagram of image division provided by an embodiment of this application;
FIG. 12 is a schematic diagram of sending image slices for display provided by an embodiment of this application;
FIG. 13 is a schematic diagram of an image slice transmission process provided by an embodiment of this application;
FIG. 14 is a schematic diagram of a rendering process provided by an embodiment of this application;
FIG. 15 is a schematic diagram of an image slice transmission process provided by an embodiment of this application;
FIG. 16 is a schematic diagram of a communication system provided by an embodiment of this application;
FIG. 17 is a schematic diagram of an image slice transmission process provided by an embodiment of this application;
FIG. 18 is a schematic diagram of an image slice transmission process provided by an embodiment of this application;
FIG. 19 is a flowchart of an image transmission and display method provided by an embodiment of this application;
FIG. 20 is a flowchart of an image transmission and display method provided by an embodiment of this application;
FIG. 21 is a flowchart of an image transmission and display method provided by an embodiment of this application;
FIG. 22 is a schematic process diagram of an image transmission and display method provided by an embodiment of this application;
FIG. 23 is a schematic process diagram of an image transmission and display method provided by an embodiment of this application;
FIG. 24 is a schematic process diagram of an image transmission and display method provided by an embodiment of this application;
FIG. 25 is a schematic process diagram of an image transmission and display method provided by an embodiment of this application;
FIG. 26 is a schematic process diagram of an image transmission and display method provided by an embodiment of this application.
Detailed description of embodiments
The technical solutions in the embodiments of this application will be described clearly and in detail below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean the following three cases: only A exists, both A and B exist, and only B exists.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and should not be understood as implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In addition, in the description of the embodiments of this application, "multiple" means two or more than two.
The electronic device involved in the embodiments of this application may be a head-mounted display device worn on the user's head, and the head-mounted display device can realize display effects such as AR, VR, and MR. The head-mounted display device may take the form of glasses, a helmet, an eye mask, and so on, which is not limited in the embodiments of this application. Not limited to a head-mounted display device, the electronic device involved in the embodiments of this application may also be another device including a display screen, such as a mobile phone, a personal computer (PC), a tablet computer (portable android device, PAD), a smart TV, or a vehicle-mounted device; the embodiments of this application do not limit the type of the electronic device.
The virtual objects displayed by the electronic device can interact with the user. In some embodiments, the user can interact with the virtual objects displayed by the electronic device directly through somatosensory interaction such as hand/arm movement, head movement, or eyeball rotation. In other embodiments, the electronic device can be used together with a handheld device, and the user can interact with the virtual objects displayed by the electronic device by manipulating the handheld device. The handheld device may be, for example, a handle, a controller, a gyro mouse, a stylus, or another handheld computing device. The handheld device may be configured with various sensors, such as an acceleration sensor, a gyroscope sensor, and a magnetic sensor, which can be used to detect and track its own motion. The handheld device can communicate with the electronic device through short-range transmission technologies such as wireless fidelity (Wi-Fi), Bluetooth, near field communication (NFC), and ZigBee, and can also communicate through a wired connection such as a universal serial bus (USB) interface or a custom interface.
参见图1,图1示出了一种头戴式显示设备200的结构示意图。Referring to FIG. 1 , FIG. 1 shows a schematic structural diagram of a head-mounted display device 200 .
如图1所示,头戴式显示设备200可以包括左透镜201、右透镜202、左显示屏203、右显示屏204、左相机205、右相机206和惯性测量单元(inertial measurement unit,IMU)207等中的部分或全部。As shown in Figure 1, the head-mounted display device 200 may include a left lens 201, a right lens 202, a left display 203, a right display 204, a left camera 205, a right camera 206 and an inertial measurement unit (inertial measurement unit, IMU) Some or all of 207 etc.
其中,左透镜201、右透镜202都可以是凸透镜、菲涅尔透镜或其他类型的一个或多个透明镜片。通过透镜的折射光线,可以将显示屏上的画面成像拉近到用户的视网膜位置,使用户的眼睛看清几乎贴在眼前的显示屏上的图像。左显示屏203、右显示屏204用于显示真实世界或虚拟世界或虚实世界结合的图像。左相机205、右相机206用于采集真实世界图像。IMU207是用来检测和测量加速度与旋转运动的传感器,可以包括加速度计、角速度计(或称陀螺仪)等,加速度计是敏感轴向加速度并转换成可用输出信号的传感器,陀螺仪是敏感运动体相对于惯性空间的运动角速度的传感器。Wherein, both the left lens 201 and the right lens 202 may be convex lenses, Fresnel lenses or one or more transparent lenses of other types. Through the refracted light of the lens, the image on the display screen can be brought closer to the position of the user's retina, so that the user's eyes can clearly see the image on the display screen that is almost attached to the front of the user's eyes. The left display screen 203 and the right display screen 204 are used to display images of the real world or virtual world or a combination of virtual and real worlds. The left camera 205 and the right camera 206 are used to collect real-world images. IMU207 is a sensor used to detect and measure acceleration and rotational motion, which can include accelerometers, angular velocity meters (or gyroscopes), etc. The accelerometer is a sensor that is sensitive to axial acceleration and converts it into a usable output signal. The gyroscope is a sensor that is sensitive to motion The sensor of the angular velocity of the body relative to the inertial space.
参见图2,图2是本申请实施例涉及的一种头戴式显示设备200的使用情景示意图。当用户佩戴头戴式显示设备200时,用户眼睛可以看到该头戴式显示设备200显示屏呈现的图像210。该头戴式显示设备200显示屏呈现的图像210是一种可视化虚拟环境,可以是虚拟世界图像,可以是真实世界图像,或者是虚实世界结合的图像。对于VR头戴式显示设备来 说,通过封闭用户对外界的视觉、听觉,并使左、右显示屏分别显示左、右眼视角的图像,这两个图像会有位置上的偏差,所以用户左右眼睛获取到的图像信息存在差异,即视差,用户大脑会下意识的根据视差计算物体离身体的距离,从而使得用户观看到的图像具有立体感和景深感,引导用户产生一种身在虚拟环境中的感觉,形成如临其境的超逼真沉浸体验。Referring to FIG. 2 , FIG. 2 is a schematic diagram of a usage scenario of a head-mounted display device 200 according to an embodiment of the present application. When the user wears the head-mounted display device 200 , the user's eyes can see the image 210 presented on the display screen of the head-mounted display device 200 . The image 210 presented on the display screen of the head-mounted display device 200 is a visualized virtual environment, which may be a virtual world image, a real world image, or a combination of virtual and real world images. For VR head-mounted display devices, by closing the user's vision and hearing to the outside world, and making the left and right display screens display images from the perspective of the left and right eyes respectively, the two images will have positional deviations, so the user There is a difference in the image information obtained by the left and right eyes, that is, parallax. The user's brain will subconsciously calculate the distance between the object and the body according to the parallax, so that the image viewed by the user has a sense of three-dimensionality and depth of field, and guides the user to have a feeling of being in a virtual environment. The feeling in the environment forms a hyper-realistic immersive experience like being there.
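As an informal illustration of the depth-from-parallax relation just described (the application itself does not prescribe any formula), a pinhole-camera model gives z ≈ f·B/d, where f is the focal length in pixels, B is the interocular baseline, and d is the horizontal disparity in pixels. The following minimal Java sketch works through this relation; the focal length, baseline, and disparity values are illustrative assumptions, not parameters taken from this application.

```java
/**
 * Minimal sketch of the depth-from-parallax relation: for two horizontally
 * offset views, an object at depth z projects with a horizontal disparity
 * d ≈ f * B / z, so z ≈ f * B / d. All numeric values below are assumed
 * example values, not parameters of this application.
 */
public class ParallaxDepth {
    /**
     * @param focalLengthPx focal length of the virtual camera, in pixels
     * @param baselineM     interocular baseline (eye separation), in meters
     * @param disparityPx   horizontal offset of the object between the
     *                      left-eye and right-eye images, in pixels
     * @return estimated distance of the object from the viewer, in meters
     */
    static double depthFromDisparity(double focalLengthPx, double baselineM,
                                     double disparityPx) {
        return focalLengthPx * baselineM / disparityPx;
    }

    public static void main(String[] args) {
        // Example: f = 1400 px, baseline = 63 mm, disparity = 40 px
        double z = depthFromDisparity(1400.0, 0.063, 40.0);
        System.out.printf("Estimated object distance: %.2f m%n", z); // about 2.21 m
    }
}
```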
In the embodiments of the present application, an electronic device providing a visualized virtual environment means that the electronic device uses a display technology such as AR, VR, or MR to render and display a virtual image composed of one or more virtual objects, or an image combining real objects and virtual objects. A virtual object may be generated by the electronic device itself using computer graphics technology, computer simulation technology, or the like, or may be generated by another electronic device using such technologies and then sent to the electronic device. The other electronic device may be a server, or a mobile phone, a computer, or the like connected to or paired with the electronic device. A virtual object may also be referred to as a virtual image or a virtual element. A virtual object may be two-dimensional or three-dimensional. A virtual object is artificial rather than an object that actually exists in the physical world. A virtual object may imitate an object existing in the real physical world, thereby bringing an immersive experience to the user. Virtual objects may include virtual animals, virtual characters, virtual trees, virtual buildings, virtual labels, icons, pictures, videos, and so on. Correspondingly, a real object refers to an object existing in the real physical environment or physical space in which the user and the electronic device are currently located. Real objects may include animals, people, trees, buildings, and the like.

For a VR device, the product form may be an all-in-one machine or a split machine.

An all-in-one machine is a device with an independent processor; it has independent computing, input, and output capabilities, is free of cable constraints, and offers a higher degree of freedom.

In a split machine, the display device is separated from the host: the display device is mainly used for displaying images, and the host is mainly used for computation. Because the host processing system of a split machine is separate from the head-mounted display device, the host can adopt a higher-performance processor and heat-dissipation system. The advantage of a split machine is therefore that functions are allocated more reasonably between the display device and the host, the processing performance is stronger, and the resources are more abundant. However, a split machine needs to consider cross-device and cross-platform compatibility, such as the hardware platform, software system, operating system, and application software.

The hardware structure of the electronic device 100 provided in the embodiments of the present application is described below by way of example.

The electronic device 100 provided in the embodiments of the present application may be the aforementioned head-mounted display device 200, the host 310 described later, or another electronic device. The electronic device 100 may include, but is not limited to, a mobile phone, a desktop computer, a notebook computer, a tablet computer, a smart screen (smart TV), a laptop computer, a handheld computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a game console, a smart home device, an Internet of Things or Internet of Vehicles device, and the like. The embodiments of the present application do not impose any limitation on the specific type of the electronic device 100.

FIG. 3 is a schematic diagram of the hardware structure of the electronic device 100 provided in an embodiment of the present application. In FIG. 3, the electronic device 100 is the head-mounted display device 200 by way of example; when the electronic device 100 is another device such as a mobile phone, part of the hardware structure may be added or removed.

The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display device 194, an eye-tracking module 195, and the like. The sensor module 180 may be used to acquire the user's posture and may include a pressure sensor 180A, a gyroscope sensor 180B, an acceleration sensor 180C, a distance sensor 180D, a touch sensor 180E, and the like.

It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown in the figure, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 is generally used to control the overall operation of the electronic device 100 and may include one or more processing units. For example, the processor 110 may include a central processing unit (CPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video processing unit (VPU), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. The different processing units may be independent devices or may be integrated into one or more processors. The controller can generate operation control signals according to instruction opcodes and timing signals to complete the control of instruction fetching and instruction execution.

The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency-point energy.

The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 can play or record videos in multiple encoding formats, for example, moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.

The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information quickly and can also learn continuously. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.

A memory may further be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. Repeated accesses are thereby avoided and the waiting time of the processor 110 is reduced, improving system efficiency.

In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, and/or a serial peripheral interface (SPI), among others.

In some embodiments, the processor 110 may render different objects at different frame rates; for example, near objects may be rendered at a high frame rate while distant objects are rendered at a low frame rate.
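To make this scheduling idea concrete, the following Java sketch (an assumption-laden illustration, not the method of this application) re-renders near objects every frame and distant objects only on every third frame, reusing the last rendered result in between. The 5 m near/far boundary and the divisor of 3 are arbitrary example values.

```java
/**
 * Illustrative sketch of per-object frame-rate control: near objects are
 * re-rendered every frame, distant objects only every FAR_FRAME_DIVISOR-th
 * frame (their previously rendered result would be reused in between).
 * The distance boundary and the divisor are assumed example values.
 */
class FrameRateScheduler {
    static final double NEAR_DISTANCE_M = 5.0; // assumed near/far boundary
    static final int FAR_FRAME_DIVISOR = 3;    // far objects: every 3rd frame

    /** Decides whether an object at the given distance is redrawn this frame. */
    static boolean shouldRender(double objectDistanceM, long frameIndex) {
        if (objectDistanceM <= NEAR_DISTANCE_M) {
            return true;                             // high frame rate
        }
        return frameIndex % FAR_FRAME_DIVISOR == 0;  // low frame rate
    }

    static void renderFrame(double[] objectDistancesM, long frameIndex) {
        for (double d : objectDistancesM) {
            if (shouldRender(d, frameIndex)) {
                // drawObject(d); (actual drawing is omitted in this sketch)
            }
        }
    }
}
```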
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be separately coupled to the touch sensor 180E, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180E through the I2C interface, so that the processor 110 and the touch sensor 180E communicate through the I2C bus interface to implement the touch function of the electronic device 100.

The I2S interface may be used for audio communication. In some embodiments, the processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the I2S interface, implementing the function of listening to audio through a Bluetooth headset.

The PCM interface may also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, implementing the function of listening to audio through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.

The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus; it converts the data to be transmitted between serial communication and parallel communication.

In some embodiments, a UART interface is generally used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the UART interface, implementing the function of playing audio through a Bluetooth headset.

The MIPI interface may be used to connect the processor 110 to peripheral components such as the display device 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to implement the shooting function of the electronic device 100, and the processor 110 communicates with the display device 194 through the DSI interface to implement the display function of the electronic device 100.

The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 to the camera 193, the display device 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, or the like.

The USB interface 130 is an interface conforming to the USB standard specification and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and peripheral devices. It may also be used to connect earphones and play audio through the earphones, and to connect other electronic devices, such as mobile phones, computers, and VR/AR or MR devices. The USB interface may be USB 3.0, compatible with high-speed DisplayPort (DP) signal transmission, and capable of carrying high-speed video and audio data.

It can be understood that the interface connection relationships between the modules illustrated in the embodiments of the present application are only schematic illustrations and do not constitute a structural limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt interface connection manners different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired-charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface 130. In some wireless-charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 may also supply power to the electronic device through the power management module 141.

The power management module 141 is used to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the display device 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110. In still other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.

The electronic device 100 may include a wireless communication function. For example, the head-mounted display device 200 may receive rendered images from another electronic device (such as a VR host or a VR server) for display, or may receive unrendered images that the processor 110 then renders and displays. The wireless communication function may be implemented by an antenna (not shown), the mobile communication module 150, the wireless communication module 160, a modem processor (not shown), a baseband processor (not shown), and the like.

The antenna is used to transmit and receive electromagnetic wave signals. The electronic device 100 may include multiple antennas, and each antenna may be used to cover one or more communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization; for example, an antenna may be multiplexed as a diversity antenna of a wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch.

The mobile communication module 150 can provide wireless communication solutions applied to the electronic device 100, including second generation (2G), third generation (3G), fourth generation (4G), and fifth generation (5G) networks. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through the antenna. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the same device as at least some modules of the processor 110.

The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be transmitted into a medium- or high-frequency signal. The demodulator is used to demodulate a received electromagnetic wave signal into a low-frequency baseband signal and then transmit the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A and the receiver 170B) or displays images or videos through the display device 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional modules.

The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive signals to be transmitted from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation through the antenna.

In some embodiments, the antenna of the electronic device 100 is coupled to the mobile communication module 150 and the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).

The electronic device 100 implements the display function through the GPU, the display device 194, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display device 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
In the embodiments of the present application, the display device 194 is used to display images, videos, and the like. The display device 194 may be used to present one or more virtual objects, so that the electronic device 100 provides a virtual reality scene for the user. In some embodiments, the electronic device 100 may include 1 or N display devices 194, where N is a positive integer greater than 1.

The manner in which the display device 194 presents virtual objects may include one or more of the following:

1. In some embodiments, the display device 194 may include a display screen, and the display screen may include a display panel. The display panel may be used to display physical objects and/or virtual objects, thereby presenting a three-dimensional virtual environment to the user. The user can see the virtual objects on the display panel and experience the virtual reality scene. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.

2. In some embodiments, the display device 194 may include an optical apparatus for projecting optical signals (for example, light beams) directly onto the user's retina. The display device 194 may convert a real-pixel image display into a near-eye projected virtual image display through one or more optical components such as reflective mirrors, transmissive mirrors, or optical waveguides. Through the optical signals projected by the optical apparatus, the user can directly see the virtual objects, perceive a three-dimensional virtual environment, and obtain a virtual interactive experience or an interactive experience combining the virtual and the real. In one example, the optical apparatus may be a pico projector or the like.

The number of display devices 194 in the electronic device may be two, corresponding respectively to the user's two eyes, and the content on the two display devices may be displayed independently. Images with parallax may be displayed on the two display devices to improve the stereoscopic effect of the image. In some possible embodiments, the number of display devices 194 in the electronic device may also be one, with both of the user's eyes viewing the same image.
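One common way to generate such a parallax pair, offered here only as a minimal sketch under assumed parameters rather than as the method of this application, is to render the same scene twice with the virtual camera shifted by half the interocular distance toward each eye.

```java
/**
 * Minimal sketch of producing a parallax image pair: the same scene is
 * rendered from two camera positions separated by an assumed interocular
 * distance. The vector math is reduced to an x-axis shift for brevity;
 * a real renderer would build full per-eye view matrices.
 */
class StereoCamera {
    static final double EYE_SEPARATION_M = 0.063; // assumed interocular distance

    /** Returns the x-offset of the eye camera relative to the head position. */
    static double eyeOffsetX(boolean leftEye) {
        return (leftEye ? -0.5 : 0.5) * EYE_SEPARATION_M;
    }

    static void renderStereoFrame(double headX, double headY, double headZ) {
        renderScene(headX + eyeOffsetX(true), headY, headZ);  // left display
        renderScene(headX + eyeOffsetX(false), headY, headZ); // right display
    }

    static void renderScene(double camX, double camY, double camZ) {
        // Scene drawing from the given camera position is omitted in this sketch.
    }
}
```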
The electronic device 100 may implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display device 194, the application processor, and the like.

The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter opens, light is transmitted to the photosensitive element of the camera through the lens, the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on image noise, brightness, and skin tone, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193.

The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then transmits the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1. The camera 193 may include, but is not limited to, a traditional color camera (RGB camera), a depth camera (RGB depth camera), a dynamic vision sensor (DVS) camera, and the like. In some embodiments, the camera 193 may be a depth camera, which can collect spatial information of the real environment.

In some embodiments, the camera 193 may capture an image including a real object, and the processor 110 may fuse the image of the real object captured by the camera 193 with an image of a virtual object and display the fused image through the display device 194.
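The application does not specify how this fusion is performed. One simple compositing choice, sketched below purely for illustration, is per-pixel alpha blending of a rendered virtual layer over the camera frame using packed 32-bit ARGB pixels.

```java
/**
 * Illustrative per-pixel alpha blend of a virtual layer over a camera frame,
 * using packed 32-bit ARGB pixels. This is one simple compositing choice,
 * not the fusion method prescribed by this application.
 */
class ArComposer {
    /** Blends one virtual-layer pixel over one camera pixel. */
    static int blendPixel(int cameraArgb, int virtualArgb) {
        int a = (virtualArgb >>> 24) & 0xFF;          // virtual-layer opacity
        int r = blendChannel(cameraArgb >> 16, virtualArgb >> 16, a);
        int g = blendChannel(cameraArgb >> 8, virtualArgb >> 8, a);
        int b = blendChannel(cameraArgb, virtualArgb, a);
        return 0xFF000000 | (r << 16) | (g << 8) | b; // fused pixel, opaque
    }

    private static int blendChannel(int cam, int virt, int alpha) {
        cam &= 0xFF;
        virt &= 0xFF;
        return (virt * alpha + cam * (255 - alpha)) / 255;
    }

    /** Fuses whole frames; all three arrays must have the same length. */
    static void composite(int[] cameraFrame, int[] virtualLayer, int[] out) {
        for (int i = 0; i < out.length; i++) {
            out[i] = blendPixel(cameraFrame[i], virtualLayer[i]);
        }
    }
}
```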
In some embodiments, the camera 193 may capture hand images or body images of the user, and the processor 110 may be configured to analyze the images captured by the camera 193 to recognize hand or body motions input by the user.

In some embodiments, the camera 193 may be used together with an infrared device (such as an infrared emitter) to detect the user's eye movements, such as the gaze direction, blinking operations, and gaze operations, thereby implementing eye tracking.

In some embodiments, the electronic device 100 may further include an eye-tracking module 195, which is used to track the movement of the human eye and then determine the gaze point of the human eye. For example, image processing technology may be used to locate the pupil position and obtain the coordinates of the pupil center, from which the person's gaze point is then computed.
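The application does not disclose a specific pupil-locating algorithm. A common textbook approach, sketched below with an assumed intensity threshold, is to treat dark pixels in a grayscale eye image as pupil candidates and take their centroid as the pupil center.

```java
/**
 * Sketch of locating the pupil center as the centroid of dark pixels in a
 * grayscale eye image. The fixed intensity threshold is an assumed,
 * simplistic stand-in for whatever pupil-detection method an actual
 * eye-tracking module would use.
 */
class PupilLocator {
    static final int PUPIL_THRESHOLD = 40; // assumed gray-level cutoff

    /** @return {cx, cy} pupil-center coordinates, or null if no dark pixels */
    static double[] pupilCenter(int[][] gray) {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < gray.length; y++) {
            for (int x = 0; x < gray[y].length; x++) {
                if (gray[y][x] < PUPIL_THRESHOLD) { // dark pixel: pupil candidate
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        if (count == 0) return null;
        return new double[] { (double) sumX / count, (double) sumY / count };
    }
}
```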
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book).

In some embodiments of the present application, the internal memory 121 may be used to store the application programs of one or more applications, where an application program includes instructions. When the application program is executed by the processor 110, the electronic device 100 generates content for presentation to the user. For example, the applications may include an application for managing the head-mounted display device 200, a game application, a conferencing application, a video application, a desktop application, or other applications.

The internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).

Random access memory features fast read/write speed and volatility. Volatility means that once power is cut off, the data stored in the RAM disappears. Typically, the static power consumption of random access memory is extremely low, while its operating power consumption is relatively large. The data in the RAM is memory data, which can be read at any time and disappears when power is cut off.

Non-volatile memory features non-volatility and stable data storage. Non-volatility means that the stored data does not disappear after power is cut off, and data can be retained for a long time without power. The data in the NVM includes application data, which can be stably stored in the NVM for a long time. Application data refers to content written during the running of an application or service process, such as photos or videos obtained by a camera application or text edited by the user in a document application.

Random access memory may include static random-access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM; for example, fifth-generation DDR SDRAM is generally called DDR5 SDRAM), and the like.

Non-volatile memory may include magnetic disk storage, flash memory, and the like.

A magnetic disk storage device uses a magnetic disk as the storage medium and features large storage capacity, a high data transmission rate, and long-term retention of stored data.

Flash memory may be classified by operating principle into NOR FLASH, NAND FLASH, 3D NAND FLASH, and the like; by the number of potential levels per storage cell into single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), and the like; and by storage specification into universal flash storage (UFS), embedded multimedia card (eMMC), and the like.

The random access memory may be directly read and written by the processor 110; it may be used to store executable programs (for example, machine instructions) of the operating system or other running programs, and may also be used to store data of users and applications.

The non-volatile memory may also store executable programs, data of users and applications, and the like, which may be loaded into the random access memory in advance for the processor 110 to read and write directly.

The external memory interface 120 may be used to connect an external non-volatile memory to expand the storage capacity of the electronic device 100. The external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, storing files such as music and videos in the external non-volatile memory.
The electronic device 100 may implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, the application processor, and the like.

The audio module 170 is used to convert digital audio information into an analog audio signal for output, and to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.

The speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can play music or hands-free calls through the speaker 170A.

The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 answers a call or receives a voice message, the voice can be heard by holding the receiver 170B close to the ear.

The microphone 170C, also called a "mic" or "sound transmitter", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which, in addition to collecting sound signals, can also implement a noise-reduction function. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and so on.

The earphone interface 170D is used to connect wired earphones. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.

The electronic device 100 may include one or more buttons 190, which can control the electronic device and provide the user with access to functions on the electronic device 100. The buttons 190 may take the form of mechanical keys such as push buttons, switches, and dials, or may be touch or near-touch sensing devices (such as touch sensors). The electronic device 100 can receive button input and generate key signal input related to the user settings and function control of the electronic device 100. The buttons 190 may include a power button, volume buttons, and the like.

The motor 191 can generate vibration alerts. The motor 191 may be used for incoming-call vibration alerts and for touch vibration feedback. For example, touch operations applied to different applications (such as photographing and audio playback) may correspond to different vibration feedback effects. Touch operations applied to different areas of the electronic device 100 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effects may further be customized.

The indicator 192 may be an indicator light and may be used to indicate the charging state and battery changes, as well as messages, notifications, and the like.

The electronic device 100 may further include other input/output interfaces, through which other apparatuses may be connected to the electronic device 100. The components may include, for example, audio/video jacks and data connectors.

The electronic device 100 is equipped with one or more sensors, including but not limited to a pressure sensor 180A, a gyroscope sensor 180B, an acceleration sensor 180C, a distance sensor 180D, a touch sensor 180E, and the like.
The pressure sensor 180A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. There are many types of pressure sensors 180A, such as resistive, inductive, and capacitive pressure sensors. When a touch operation acts on the electronic device 100, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions.

The gyroscope sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 around three axes (namely, the x, y, and z axes) may be determined through the gyroscope sensor 180B. The gyroscope sensor 180B may also be used for navigation, somatosensory game scenarios, camera stabilization, and the like.

The acceleration sensor 180C can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of the electronic device and is applied to somatosensory game scenarios, landscape/portrait switching, pedometers, and other applications.

In some embodiments of the present application, the electronic device 100 may track the movement of the user's head according to the acceleration sensor, the gyroscope sensor, and the like.
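On an Android-based implementation such as the one this application describes, one way head rotation could be tracked is by integrating gyroscope readings over time. The sketch below uses the standard Android sensor API; the single-axis integration is a simplification for illustration, and a practical tracker would fuse accelerometer and magnetometer data to limit drift.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

/**
 * Sketch of head-rotation tracking from the gyroscope via the standard
 * Android sensor API. Angular velocity (rad/s) is integrated over time
 * into a yaw angle; reduced to one axis for illustration.
 */
class HeadTracker implements SensorEventListener {
    private final SensorManager sensorManager;
    private long lastTimestampNs = 0;
    private double yawRad = 0; // accumulated head rotation around the z-axis

    HeadTracker(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
    }

    void start() {
        Sensor gyro = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        sensorManager.registerListener(this, gyro, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (lastTimestampNs != 0) {
            double dt = (event.timestamp - lastTimestampNs) * 1e-9; // ns -> s
            yawRad += event.values[2] * dt; // values[2]: z-axis angular velocity
        }
        lastTimestampNs = event.timestamp;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* unused */ }
}
```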
The distance sensor 180D is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 100 may use the distance sensor 180D to measure distance to achieve fast focusing.

The touch sensor 180E is also called a "touch device". The touch sensor 180E is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display device 194.

In the embodiments of the present application, the processor 110 may be used to determine, according to the sensor data collected by the electronic device 100, the content that the user sees through the display device of the electronic device 100. The GPU is used to perform mathematical and geometric operations according to the data obtained from the processor 110 (for example, data provided by an application), render images using computer graphics technology, computer simulation technology, and the like, and determine the image to be displayed on the electronic device 100. In some embodiments, the GPU may add correction or pre-distortion to the image rendering process to compensate for or correct the distortion caused by the optical components of the electronic device 100.
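Such pre-distortion is often modeled as a radial (barrel) warp of normalized image coordinates that cancels the pincushion distortion introduced by HMD lenses. The sketch below uses the common polynomial model r' = r·(1 + k1·r² + k2·r⁴); the coefficients are assumed example values, not parameters disclosed by this application.

```java
/**
 * Sketch of radial pre-distortion of normalized image coordinates, as is
 * commonly done to counteract the pincushion distortion of HMD lenses.
 * Uses the polynomial model r' = r * (1 + k1*r^2 + k2*r^4); the
 * coefficients are assumed illustrative values.
 */
class LensPredistortion {
    static final double K1 = 0.22, K2 = 0.24; // assumed distortion coefficients

    /** Maps a point (x, y), centered on the lens axis, to its pre-distorted position. */
    static double[] predistort(double x, double y) {
        double r2 = x * x + y * y;               // squared radial distance
        double scale = 1.0 + K1 * r2 + K2 * r2 * r2;
        return new double[] { x * scale, y * scale };
    }
}
```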
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. The embodiments of the present application take the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
FIG. 4 is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present application.
The layered architecture divides the software into several layers, each of which has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system may be divided into five layers, which are, from top to bottom, the application layer (applications), the application framework layer (application framework), the system libraries (native libraries) together with the Android runtime, the hardware abstraction layer (HAL), and the kernel layer (kernel).
The application layer may include a series of application packages. The application packages may include applications such as Camera, Gallery, Games, WLAN, Bluetooth, Music, and Video.

The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.

The application framework layer may include a window manager, an activity manager, a display manager, a resource manager, an input manager, a notification manager, a view system, and the like.

The window manager is used to manage window programs. The window manager may be used to draw the size and position area of a window, control the display or hiding of windows, and manage the display order of multiple windows; it can also obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, and so on.

The activity manager is used to manage the life cycle of an application's activities, for example managing activity processes such as creation, backgrounding, and destruction.

The display manager is used to manage the display life cycle of applications. It can decide how to control the logical display according to the currently connected physical display device, send notifications to the system and applications when the state changes, and so on.

The input manager is used to listen for and centrally manage input events. For example, when it is detected that the user performs input with a handheld controller, or a sensor detects the user's motion data, the input manager can listen to the system call and further forward or process the monitored input event.

The resource manager provides applications with access to various non-code resources, such as localized strings, icons, pictures, layout files, and video files. The resource management interface class ResourceImpl can serve as an external resource management interface, through which application resources can be updated.

The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which may disappear automatically after a short stay without user interaction.

The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may be composed of one or more views; for example, a display interface including a notification may include a view for displaying text and a view for displaying pictures.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
The core library consists of two parts: the functions that the Java language needs to call, and the core library of Android.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.

The system libraries (native libraries) may include multiple functional modules, for example, a surface manager, media libraries, a three-dimensional graphics processing library (for example, OpenGL ES), and a two-dimensional graphics engine (for example, SGL).

The surface manager is used to manage the display subsystem and provides fusion of two-dimensional (2D) and three-dimensional (3D) layers for multiple applications. The surface manager can access the SurfaceFlinger service. The SurfaceFlinger service is the core of the graphical user interface (GUI); it is responsible for compositing the graphics data of all applications in order and outputting the result to a buffer stream.
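As a rough illustration of what compositing layers "in order" means (a toy model, not SurfaceFlinger's actual implementation, which involves buffer queues, fences, and the HWC), the following Python sketch blends a stack of per-pixel RGBA layers back to front with the standard alpha-over rule:

```python
# Toy back-to-front layer compositing; illustrates ordered alpha blending only.
def composite(layers):
    """layers: list of (r, g, b, a) values for one pixel, ordered back to
    front; returns the blended (r, g, b) value for that pixel."""
    out = (0.0, 0.0, 0.0)
    for r, g, b, a in layers:
        out = (r * a + out[0] * (1 - a),
               g * a + out[1] * (1 - a),
               b * a + out[2] * (1 - a))
    return out

# Example: an opaque blue background with a half-transparent red layer on top.
print(composite([(0.0, 0.0, 1.0, 1.0), (1.0, 0.0, 0.0, 0.5)]))  # (0.5, 0.0, 0.5)
```

Changing the order of the layers changes the result, which is why the compositor must mix the applications' graphics data in a well-defined order.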
The media libraries support playback and recording of many commonly used audio and video formats, as well as static image files. The media libraries can support multiple audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.

The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like.

The two-dimensional graphics engine is a drawing engine for 2D drawing.

The hardware abstraction layer (HAL) is an interface layer between the operating system kernel and the hardware circuits. Its purpose is to abstract the hardware: it encapsulates the Linux kernel drivers, provides standard interfaces upward, and hides the implementation details of the lower-level drivers. The HAL may include a hardware composer (HWC), a frame buffer, a sensor interface, a Bluetooth interface, a Wi-Fi interface, audio and video interfaces, and so on.

The hardware composer (HWC) is used to perform the final composition for display on the graphics data buffer stream composited by SurfaceFlinger.

The frame buffer holds the graphics buffer stream composited by the SurfaceFlinger service; the SurfaceFlinger service draws images by writing content to the frame buffer.

The sensor interface can be used to obtain the corresponding data from the sensor driver.

The kernel layer is the layer between hardware and software, and is used to provide core system services such as security, memory management, process management, the network protocol stack, and the driver model. The kernel layer may include a display controller driver, a sensor driver, a camera driver, an audio driver, a video driver, and the like. A driver communicates with a hardware device through a bus, controls the hardware to enter various working states, and reads the device's registers to obtain the device's state. For example, through a driver, user operation events such as sensor input and camera input can be obtained and converted into data.
In this embodiment of the present application, the software and the hardware can work together: when the head-mounted display device 200 detects that the user moves, a corresponding image is generated for display. Specifically, in one example, when the gyroscope sensor 180B and the acceleration sensor 180C detect an input event of the user turning the head, a corresponding hardware interrupt is sent to the kernel layer. The sensor driver at the kernel layer obtains the sensor input data. The input manager at the application framework layer obtains the sensor input event from the kernel layer. The window manager may then need to update the size or position of the application window, and the activity manager updates the life cycle of the displayed activity. After the window manager has adjusted the size and position of the window and the active display interface, the display manager refreshes the image in the window area. After the window activity is updated, the surface manager, through the SurfaceFlinger service, mixes the graphics data rendered by the GPU in order, generates a graphics buffer stream, and outputs it to the frame buffer. The frame buffer then sends the graphics buffer stream to the hardware composer of the hardware abstraction layer, and the hardware composer performs the final composition on the graphics buffer stream. The hardware composer sends the final graphics data to the display driver at the kernel layer, and the display apparatus 194 displays it.
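To make the data flow above easier to follow, the following Python sketch chains the stages as plain functions. All names here are illustrative placeholders standing in for the modules just described; they are not actual Android interfaces:

```python
# Illustrative event-to-display pipeline; each function stands in for one
# of the modules described above.
def sensor_driver(interrupt):        # kernel layer: read raw sensor data
    return {"gyro": interrupt["gyro"], "accel": interrupt["accel"]}

def input_manager(raw):              # framework layer: wrap as an input event
    return {"type": "head_motion", "data": raw}

def render_layers(event):            # GPU: produce per-window layers (stub)
    return ["layer0@" + event["type"], "layer1@" + event["type"]]

def surface_flinger(layers):         # mix layers, in order, into a buffer stream
    return list(layers)

def hardware_composer(buffers):      # HAL: final composition for scan-out
    return "|".join(buffers)

interrupt = {"gyro": (0.0, 0.1, 0.0), "accel": (0.0, 0.0, 9.8)}
event = input_manager(sensor_driver(interrupt))
frame = hardware_composer(surface_flinger(render_layers(event)))
print(frame)  # handed to the display driver for the display apparatus to show
```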
The above description of the software architecture is only an example. It should be understood that the software architecture shown in this embodiment does not constitute a specific limitation on the present application. In some other embodiments of the present application, the software architecture of the electronic device 100 may include more or fewer modules than shown in the figure, combine some modules, split some modules, or use a different architectural arrangement. The illustrated components may be implemented in hardware, in software, or in a combination of software and hardware.
FIG. 5 shows a communication system 30, which in some embodiments may be a split-architecture system. The communication system 30 may include multiple intelligent terminal devices, with communication connections established between them. As shown in FIG. 3, in some embodiments, the communication system 30 includes a head-mounted display device 200 and a host 310. The host 310 may be, for example, a device with strong image processing capabilities, such as computer A or mobile phone A; it is not limited to the computer A or mobile phone A shown in FIG. 3. The host 310 may be one or more other devices with strong image processing performance, such as a server, or one or more cloud devices, such as a cloud host or cloud server; this embodiment places no limitation on this. A first connection is established between the head-mounted display device 200 and the host 310. The head-mounted display device 200 is mainly used for displaying images, while the host 310 mainly provides the image computing and processing functions and then sends the results through the first connection to the head-mounted display device 200 for display. The first connection may be a wired connection or a wireless connection; this embodiment does not limit it. In the following embodiments, a terminal device may also be referred to simply as a terminal; a terminal device is generally an intelligent electronic device that can provide a user interface, interact with a user, and provide service functions for the user.
Each terminal device in the communication system 30 may run the HarmonyOS (HOS) system or another type of operating system; the operating systems of the terminal devices in the communication system 30 may be the same or different, and this application places no limitation on this. In some embodiments, when multiple terminals in the communication system 30 all run the HarmonyOS system, the system composed of these terminals may be called a super virtual device, also known as a hyper terminal: the capabilities of multiple terminals are integrated through distributed technology and placed in a virtual hardware resource pool, and the terminal capabilities are uniformly managed, scheduled, and integrated according to service needs to provide services externally, enabling fast connection, capability collaboration, and resource sharing among different terminals.
The first connection may include a wired connection, such as a high-definition multimedia interface (HDMI) connection or a display port (DP) connection. The first connection may also include a wireless connection, such as a Bluetooth (BT) connection, a wireless fidelity (Wi-Fi) connection, or a hotspot connection, enabling the head-mounted display device 200 and the host 310 to communicate under the same account, without accounts, or under different accounts. The first connection may also be an Internet connection. In some embodiments, the head-mounted display device 200 and the host 310 may log in to the same account and thereby connect and communicate through the Internet. Of course, multiple terminals may also log in to different accounts and connect by binding. For example, the head-mounted display device 200 and the host 310 may log in to different accounts; the host 310 is set, in a device management application, to bind the head-mounted display device 200 to itself, and then connects through that device management application. This embodiment of the application does not limit the type of the first connection, and the terminals in the communication system 30 may transmit and exchange data through various types of communication connections. In addition, the terminals may also connect and communicate by combining any of the above methods, which is not limited in this embodiment of the present application.
Correspondingly, each terminal device may be configured with a mobile communication module and a wireless communication module for communication. The mobile communication module can provide wireless communication solutions applied to the terminal, including 2G/3G/4G/5G. The wireless communication module may include a Bluetooth (BT) module and/or a wireless local area network (WLAN) module. The Bluetooth module may provide one or more Bluetooth communication solutions, including classic Bluetooth (Bluetooth 2.1) or Bluetooth Low Energy (BLE), and the WLAN module may provide one or more WLAN communication solutions, including wireless fidelity peer-to-peer (Wi-Fi P2P), wireless fidelity local area network (Wi-Fi LAN), or wireless fidelity software access point (Wi-Fi softAP). In some embodiments, Wi-Fi P2P allows devices in a wireless network to connect to each other in a point-to-point manner without going through a wireless router; in the Android system, it may also be called wireless fidelity direct (Wi-Fi Direct). Devices that have established a Wi-Fi P2P connection can exchange data directly over Wi-Fi (they must be in the same frequency band) without connecting to a network or hotspot, realizing point-to-point communication such as transferring files, pictures, and videos. Compared with Bluetooth, Wi-Fi P2P has the advantages of faster search and transmission speeds and a longer transmission distance.
It should be noted that the communication system 30 shown in FIG. 3 is only used to assist in describing the technical solutions provided by the embodiments of the present application and does not limit the embodiments of the present application. In an actual service scenario, the communication system 30 may include more or fewer terminal devices, for example, it may also include a handheld controller; this application places no limitation on the terminal types, the number of terminals, the connection methods, and so on. For example, in a scenario where a PC casts its screen to a mobile phone, the communication system 30 includes the PC and the mobile phone.
A head-mounted display device inevitably exhibits MTP (motion-to-photon) latency during use. MTP latency refers to the total delay in the cycle from the user's head movement until the optical signal of the correspondingly displayed new image on the head-mounted display device's screen reaches the human eye. It includes a series of times: the sensors detecting the user's motion, the central processing unit (CPU) running the application, the graphics processing unit (GPU) computing the image to be presented, rendering, and sending the image for display.
For example, in an AR/VR or MR scenario, the path from the user's movement to the user seeing the corresponding picture may pass through multiple modules, as shown in FIG. 6:

(1) Sensor detection module.

The sensors are used to detect user movement information and/or external environment information, ensuring that the viewing angle of the picture tracks body movements and switches accordingly at any time. When the user moves, the sensors on the head-mounted display device or the handheld motion-sensing controller (such as cameras, gyroscopes, accelerometers, and magnetometers) can sensitively capture the user's movement and generate sensor data in real time, such as image data, acceleration data, and angular velocity data.
(2) Motion tracking algorithm module.

In some embodiments, the processor may take the sensor data generated in the previous step as input and obtain the user's real-time pose data through data fusion and computation. A pose refers to the position and orientation of a person or object, including posture and facing direction.

When the user's pose changes, the user's viewing angle changes accordingly. Specifically, the pose may be the user's head pose. The pose can be obtained through sensors and/or cameras in the head-mounted display device.
If the head-mounted display device is a 3-degree-of-freedom (DoF) device, the output pose data includes quaternion data corresponding to rotation (rotation about the X, Y, and Z axes); if the head-mounted display device is a 6DoF device, the output pose data includes the quaternion data corresponding to rotation (rotation about the X, Y, and Z axes) and three-axis position data (up-down, front-back, and left-right movement). A quaternion can be used to represent rotation and orientation in three-dimensional space. For example, for the quaternion q = ((vx, vy, vz), w) = (v, w), where v is a vector and w is a real number, q can represent a rotation by angle w about the axis given by the vector v = (vx, vy, vz), following the right-hand rule.
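For reference, in the common unit-quaternion convention (a notational aside; the (v, w) shorthand above expresses the same idea, and a particular device's encoding may differ), a rotation by angle $\theta$ about a unit axis $\mathbf{v}$ is packed into a quaternion and applied to a point $p$ as

$$ q = \left(\mathbf{v}\sin\frac{\theta}{2},\ \cos\frac{\theta}{2}\right), \qquad \|\mathbf{v}\| = 1, \qquad p' = q\,p\,q^{-1}, $$

where $p$ is treated as the pure quaternion $(p_x, p_y, p_z, 0)$ and, for a unit quaternion, $q^{-1}$ equals the conjugate of $q$.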
(3) Image rendering module.

Based on the pose data and the image to be rendered under the corresponding pose, the rendering module performs anti-distortion and other processing on the image and then sends it to a processor such as the GPU for drawing, going through rendering steps such as coordinate transformation, view transformation, layer rendering, texture synthesis, shading, clipping, and rasterization. For a VR wearable device, two images corresponding to the left and right eyes may be rendered separately.
In one embodiment, when a user wears a VR wearable device, behaviors such as moving position or turning the head may occur. To make the virtual environment more realistic, when such movement occurs, the image needs to be processed correspondingly to give the user a realistic feeling. Therefore, in the VR field, image rendering includes rendering attributes such as color and transparency, and also includes rotating and/or translating the image according to the human body pose data detected by the VR wearable device. The pose detected by the VR wearable device covers multiple degrees of freedom, such as rotation angles and/or translation distances, where the rotation angles include the yaw, pitch, and roll angles, and the translation distances are measured along the three axes (X, Y, and Z). Image rendering therefore includes rotating the image according to the rotation angle of the VR wearable device, and/or translating the image according to the translation distance of the VR wearable device.
For example, when a user wearing a VR wearable device faces forward, the objects in the rendered image (such as mountains and water) are directly in front; after the user's head rotates to the right by an angle (for example, 40 degrees), the objects in the image (such as mountains and water) rotate 40 degrees to the left. In this way, the image the user sees is linked to the user's movement and viewing angle, and the experience is better.
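A minimal sketch of this idea, with illustrative sign conventions (real engines use full 4x4 view matrices): the view transform applies the inverse of the head's rotation, so a 40-degree head turn to the right rotates scene points 40 degrees to the left in view space.

```python
import math

def yaw_matrix(deg):
    """2D rotation about the vertical axis, acting on (x, z) ground-plane coords."""
    a = math.radians(deg)
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

def apply(m, p):
    return (m[0][0] * p[0] + m[0][1] * p[1],
            m[1][0] * p[0] + m[1][1] * p[1])

head_yaw = 40.0                  # user turned the head 40 degrees to the right
view = yaw_matrix(-head_yaw)     # view transform = inverse of the head rotation
mountain = (0.0, -10.0)          # a point 10 units straight ahead, as (x, z)
print(apply(view, mountain))     # x becomes negative: the point moves left
```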
(4) Image send-for-display module.

The image send-for-display module is used to send the image data (data of multiple pixels) rendered by a processor such as the GPU to the display apparatus. In some embodiments, the display apparatus includes a frame buffer. The frame buffer, which may also be called video memory, is used to store image rendering data that the GPU has processed or is about to fetch. The electronic device sending the rendered data to the display apparatus may mean sending it to the frame buffer in the display apparatus. In other embodiments, the display apparatus may have no frame buffer, and the electronic device sending the rendered data to the display apparatus may mean sending the image data (data of multiple pixels) directly to the display device for display.

After the GPU submits the image rendering data to the frame buffer, in coordination with the vertical synchronization (Vsync) signal of the screen refresh, the video controller fetches the image data from the frame buffer at a specified time point after the Vsync signal; after an optical display device such as the display screen receives the display signal, it fetches the frame from the buffer and displays it on the screen.

When the display screen displays a picture, it scans the screen line by line from left to right and top to bottom, displaying the pixels in sequence. After one line is scanned, a horizontal synchronization (Hsync) signal is sent; after a whole page is scanned, one frame is displayed, a Vsync signal is emitted, and the scan of the next page begins.

The Vsync signal is a pulse signal, generally generated by a hardware clock; it can also be generated by software simulation (for example, by the hardware composer HWC).
Therefore, under normal circumstances, when the Vsync mechanism is enabled, the electronic device waits for the Vsync signal before rendering a new frame and updating the frame buffer. This resolves screen tearing and improves the smoothness of the picture. If the data in the current frame buffer has not been fully updated and still retains data from the previous frame, then when the screen refreshes, the content fetched from the frame buffer comes from different frames, producing a visible tearing effect.
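The following Python sketch (a toy model only, not any real display driver API) illustrates why swapping buffers only at the Vsync boundary avoids tearing: the scan-out always reads a buffer that was completely written.

```python
# Toy double-buffering model: render into a back buffer, swap only on Vsync.
front = ["frame0"] * 4          # buffer currently scanned out (4 "lines")
back = [None] * 4               # buffer being rendered into

def render_frame(n):
    for i in range(4):          # fill every line before the swap
        back[i] = f"frame{n}:line{i}"

def on_vsync():
    global front, back
    front, back = back, front   # atomic swap at the Vsync boundary

def scan_out():
    return list(front)          # the panel always reads one complete frame

render_frame(1)
on_vsync()
print(scan_out())               # every line belongs to frame1, so no tearing
```

If the swap happened mid-update instead, scan_out() could return lines from two different frames, which is exactly the tearing case described above.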
The display principles of different types of display screens differ. Common display screen types include the liquid crystal display (LCD), the organic light-emitting diode (OLED) display, and the plasma display panel (PDP). For example, an LCD changes the electric field (voltage) to change the arrangement of the liquid crystal molecules, which changes the transmittance of the external light source (the incident beam) through the liquid crystal; color display is then completed through the color filters (red, green, and blue primary-color filter films). In an OLED display, an organic light-emitting layer is sandwiched between the positive and negative electrodes. Under the action of an electric field, the holes generated by the anode and the electrons generated by the cathode move and are injected into the hole transport layer and the electron transport layer respectively; when they migrate to the light-emitting layer and meet there, energy excitons are generated, exciting the light-emitting molecules to produce visible light, and color filters then complete the color display. The display principles and hardware structures of different display screen types differ and are not described in detail here. The type of display screen does not limit this embodiment of the application.

The following briefly introduces the image display process in some embodiments with reference to FIG. 7.

As shown in FIG. 7, drawing an image generally requires the CPU and the GPU to work together. For example, the CPU may be responsible for computing the display content related to the image, such as view creation, layout calculation, picture decoding, text drawing, pose data calculation, and texture data calculation; the CPU then passes the computed content to the GPU, which performs operations such as transformation, composition, and rendering.
The GPU may be responsible for layer rendering, texture synthesis, shader rendering, and so on, which may include: the vertex shader processes the vertex data, for example translation, scaling, and rotation, performing various model, view, and projection transformations and converting the data into normalized coordinates; the tessellation shader and geometry shader describe the shape of the object and process the model geometry to make it smoother; the processed data is assembled into primitives; the primitives are clipped against the viewport, cutting away what the viewport cannot see; the primitives are rasterized, generating display screen coordinates and pixel fragments; the fragment shader shades the pixel fragments; after fragment shader processing, rendering steps such as texture blending are performed, finally producing the pixel data of the image.
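As a rough sketch of the first of these stages, the conversion into normalized coordinates (the matrices and values here are illustrative assumptions, and an OpenGL-style projection convention is assumed), a vertex is multiplied through a model-view-projection chain and then divided by its w component:

```python
import numpy as np

def perspective(fov_deg, aspect, near, far):
    """A standard perspective projection matrix (OpenGL-style convention)."""
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

model = np.eye(4)                        # object placed at the origin
view = np.eye(4); view[2, 3] = -5.0      # camera pulled back 5 units
proj = perspective(90.0, 16 / 9, 0.1, 100.0)

vertex = np.array([1.0, 1.0, 0.0, 1.0])  # homogeneous object-space vertex
clip = proj @ view @ model @ vertex
ndc = clip[:3] / clip[3]                 # perspective divide -> normalized coords
print(ndc)                               # all components fall inside [-1, 1]
```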
The GPU sends the pixel data of the image to the frame buffer.

Based on the Vsync signal, the video controller fetches the pixel data of the image frame from the frame buffer at the specified time, and the display screen then displays that frame.

Usually, the drawing, rendering, and display-sending of each frame must be coordinated with the Vsync signal. Within each Vsync period, one complete frame needs to be rendered and sent to the frame buffer; otherwise, display problems such as stuttering, dropped frames, and tearing occur.

(5) Data transmission module.

In some embodiments, when the AR/VR or MR device has a split structure, as in the communication system 30, a data transmission module may also be included. The data transmission module may be used to send or receive image-related data through a wired or wireless connection. For example, in one case, when the user wearing the head-mounted display device 200 moves the head, the sensor module captures the head movement and generates sensor data, which the data transmission module sends to the host 310; the motion tracking algorithm module in the host 310 processes the sensor data to generate data such as the pose, which is then sent back to the head-mounted display device 200 through the data transmission module; the image rendering module in the head-mounted display device 200 renders based on the pose data, sends the rendered data to the frame buffer, and finally displays it on the screen. In another example, the rendering module may be located on the host 310: when the user wearing the head-mounted display device 200 moves the head, the sensor module captures the head movement and generates sensor data, which the data transmission module sends to the host 310; the motion tracking algorithm module in the host 310 processes the sensor data to generate data such as the pose, the image rendering module in the host 310 renders based on the pose data, and the rendered data is then sent to the head-mounted display device 200 through the data transmission module; finally, the send-for-display module fetches the rendered image data and hands it to the display screen for display.
Therefore, from the user's movement to the user finally seeing the corresponding image, multiple steps are involved, such as sensor data collection, image acquisition, image transmission, image rendering, and sending the image for display, and each step takes a certain amount of time. When the MTP latency is large, the body state the user perceives visually is inconsistent with that perceived by the vestibular organs of the inner ear, which are responsible for sensing the body state; that is, the visually observed picture lags behind what the body feels, and the conflict between the two makes the user dizzy. Industry-recognized research shows that when the MTP latency is kept below 20 ms, the human body's motion-sickness response in VR/AR or MR scenarios can be greatly reduced.

Therefore, reducing the MTP latency of a head-mounted display device improves human-computer interaction performance, thereby reducing the user's adverse reactions such as dizziness and vomiting and improving the user experience.

In some solutions for reducing MTP latency, each step can be optimized to reduce its time consumption, for example, increasing the frame rate of image acquisition, providing predicted poses, improving the rendering performance of the GPU, and raising the refresh rate of the display screen above 75 Hz. These solutions are usually implemented as a serial process, and the data unit processed in each step is an entire image.

Based on the process modules shown in FIG. 6, in one example, the MTP latency can be decomposed into the individual steps. FIG. 8 shows a time-consumption analysis of each module. As shown in FIG. 8:
(1) Sensor detection module: the sensor detection module is used to detect user movement information and/or external environment information. Here, a camera and an IMU are taken as the example sensors. Images are generally generated at 30 Hz. Assuming the exposure time of one frame is 10 ms and the image timestamp is taken in the middle of the exposure, there is a 5 ms delay between the image's timestamp and the completion of its imaging. The IMU runs at 1000 Hz, so IMU data generation can be considered to have almost no delay. Because the image data volume is large, there is also a certain delay from image generation to its transfer to the processor; assuming this is 5 ms, the total time from sensor data generation to completed transfer is 10 ms.

(2) Motion tracking algorithm module: the processor can compute the user's pose data based on the sensor data. Assume this step takes 5 ms to complete. Of course, in this module, optimization methods can reduce the computation time, and the overall MTP latency can even be reduced by predicting the pose; for now, however, only real-time pose computation is considered, that is, this module introduces a delay of 5 ms.

(3) Image rendering module: based on the pose data and the image to be rendered under the corresponding pose, a processor such as the GPU can render the image. Assuming the final screen refresh rate is 90 Hz, the rendering time of each frame must be less than 1 s / 90 Hz = 11.1 ms; here, rendering is assumed to take 8 ms.

(4) Image send-for-display module: the image send-for-display module is used to send the rendered image frame data to the display apparatus. Because the amount of image data finally sent for display is large, the transfer is assumed to take 8 ms.

In one embodiment, the total time for performing the above four steps in series is 31 ms; that is, the MTP latency from the user's head movement to the final display of the image is 31 ms. Table 1 shows the time-consumption analysis of each MTP latency step in the corresponding AR scenario.
Table 1
  Sensor detection (data generation and transfer): 10 ms
  Motion tracking (pose computation): 5 ms
  Image rendering: 8 ms
  Image send-for-display: 8 ms
  Total (MTP latency): 31 ms
The generation of the above sensor data and the computation of the pose, that is, the work done by the sensor detection module and the motion tracking algorithm module, are collectively referred to as the image acquisition stage; image acquisition is followed by image rendering and image display. The steps described in FIG. 8 and Table 1 can thus be simplified into three stages, image acquisition, image rendering, and image display, taking 15 ms, 8 ms, and 8 ms respectively, as shown in FIG. 9.

In other embodiments, for example in the application scenario of a split device, the data transmission between the host and the head-mounted display device also takes a certain amount of time; in some embodiments, the time spent on data transmission may also be counted in the time of the image acquisition stage.

In the above embodiments, the process from image acquisition through image rendering to sending the image for display is implemented serially, the smallest image unit of each step is an entire image, and there is redundancy in time, so it can be further optimized.

In other embodiments, this application provides an image transmission and display method that divides an entire image into multiple parts, renders these parts in parallel and/or sends them for display in parallel, and/or transmits these partial images in parallel among multiple devices, so that the display path is optimized, the waiting time in the image transmission and display process is reduced, and the delay of image transmission and display is further lowered. See the detailed description later in this document.

Implementing the method provided in this application can reduce the delay in the image transmission and display process, so the image can be displayed faster. Further, in AR/VR or MR scenarios, the MTP latency can be reduced, effectively alleviating adverse motion-sickness reactions such as dizziness and vomiting and improving the user experience. It can be understood that the method provided in this application is applicable to more scenarios, for example, a PC casting its screen to display images or a vehicle-mounted device displaying images; the parallel processing of sliced transmission, rendering, and sending for display speeds up the end-to-end display process, reduces latency, and improves the user's viewing experience.
This application also provides related electronic devices and systems to which the image transmission and display method provided in this application can be applied. The electronic device may include the aforementioned head-mounted display device, which can realize display effects such as AR, VR, and MR. Without being limited thereto, the electronic device involved in the embodiments of this application may also be another device that includes a display screen, such as a mobile phone, PAD, PC, smart TV, or vehicle-mounted device; this application places no limitation on the specific type of the electronic device. The communication system 30 of the foregoing example likewise does not limit other embodiments of this application.

It can be understood that the embodiments provided in this application are mainly introduced using the example in which the electronic device is a head-mounted display device in a VR scenario, but the image transmission and display method provided in the embodiments of this application is not limited to head-mounted display devices and VR scenarios. More generally, the method provided in this application is applicable whenever any type of electronic device (such as a mobile phone, PC, PAD, vehicle-mounted device, game console, smart wearable device, smart home device, or Internet of Things device) transmits and displays images.

The image transmission and display method provided in the embodiments of this application can be applied to a variety of service scenarios, including but not limited to:
(1) AR, VR, and MR scenarios

In the scenario shown in FIG. 2 above, the user can wear a head-mounted display device, which can display images using technologies such as VR, AR, and MR, immersing the user in a virtual environment and providing a VR/AR/MR experience. With traditional serial image transmission, the MTP latency is large and easily causes adverse reactions such as dizziness and vomiting. Using the sliced image transmission and display method provided in this application, VR/AR/MR images can be displayed faster and the MTP latency is reduced, effectively alleviating adverse motion-sickness reactions such as dizziness and vomiting and improving the user experience.

(2) Screen casting between a mobile phone and a PC or large screen

For example, when a connection is established between a mobile phone and a PC or large screen and the mobile phone interface is cast onto the PC or large-screen display, using the sliced image transmission and display method provided in this application, the PC or large screen can display the mobile phone interface more quickly, giving the user a smoother screen-casting experience.

(3) Vehicle-mounted device display scenario

For example, a smart vehicle-mounted display can respond to user operations, on-board sensors, or other input information and generate new images. Using the sliced image transmission and display method provided in this application, the vehicle-mounted display can show the corresponding images more quickly, giving the user a faster and more comfortable driving experience.

The scenarios described above are only examples and do not constitute any limitation on other embodiments of this application. Beyond the above scenarios, the image transmission and display method provided in the embodiments of this application can be applied to social, office, shopping, and any other scenarios in which images need to be displayed.
The following introduces the specific implementation of the image transmission and display technical solution provided in the embodiments of this application. The solution below can be applied to the various service scenarios mentioned above.

The image transmission and display technical solution provided in this application divides an entire image into multiple parts for parallel processing of transmission, rendering, and sending for display, so that the display path is optimized, the waiting time in the image transmission and display process is reduced, and the delay of image transmission and display is further lowered. In AR/VR or MR scenarios, the MTP latency can be reduced, effectively alleviating adverse motion-sickness reactions such as dizziness and vomiting and improving the user experience.

Compared with the image transmission and display process shown in FIG. 9, the time of the image acquisition stage in the technical solution of this application remains unchanged; however, in the image rendering and send-for-display stages, the smallest processing unit is no longer a whole image. Instead, the image is divided into multiple parts, which are rendered and sent for display in a parallel, staggered manner.

FIG. 10 illustrates the parallel rendering and display-sending process provided by some embodiments.
After the sensor data is acquired and the processor computes the pose data, the head-mounted display device 200 obtains the image data of an entire image for rendering. As shown in FIG. 11, suppose the entire first image is divided horizontally, from top to bottom, into four parts: the first slice Slice1, the second slice Slice2, the third slice Slice3, and the fourth slice Slice4. In one example, the entire image is divided evenly into Slice1, Slice2, Slice3, and Slice4. If rendering the entire image takes 8 ms and sending it for display takes 8 ms, then after even division, each of Slice1, Slice2, Slice3, and Slice4 takes 2 ms to render and 2 ms to send for display.

A processor such as the GPU first renders Slice1, taking 2 ms. After Slice1 is rendered, its rendered image pixel data is sent to the display apparatus (a display device such as a display screen), which may be referred to simply as sending for display, taking 2 ms. If the display apparatus includes a frame buffer, sending for display may mean sending the rendered image pixel data to the frame buffer. While Slice1 is being sent for display, the GPU renders Slice2, taking 2 ms, and Slice2 is then sent for display, taking 2 ms. While Slice2 is being sent for display after its rendering completes, the GPU renders Slice3, taking 2 ms, and Slice3 is then sent for display, taking 2 ms. While Slice3 is being sent for display after its rendering completes, the GPU renders Slice4, taking 2 ms, and Slice4 is then sent for display, taking 2 ms.

In some embodiments, the display screen includes a frame buffer. After Slice1, Slice2, Slice3, and Slice4 have all been sent for display, in coordination with the Vsync signal, the display screen fetches the complete data of the first image from the frame buffer for display. As shown in FIG. 12, after Slice1 is sent for display, the frame buffer is updated with the image data of Slice1; after Slice2 is sent for display, the frame buffer holds the image data of Slice1 and Slice2; after Slice3 is sent for display, the frame buffer holds the image data of Slice1, Slice2, and Slice3; after Slice4 is sent for display, the frame buffer holds the image data of Slice1, Slice2, Slice3, and Slice4, that is, the complete data of the first image. In coordination with the display signal, the display screen fetches the pixel data of the first image from the frame buffer and displays the first image in full.

It should be noted that before the display screen refreshes, all of the partial images, that is, Slice1, Slice2, Slice3, and Slice4, should have been sent for display, so that the display screen can fetch the complete image data of the first image from the frame buffer for display. If, before the screen refresh, only the image data of Slice1, Slice2, and Slice3 has been updated in the frame buffer, then at refresh time the regions corresponding to Slice1, Slice2, and Slice3 show the updated images, while the Slice4 region shows the residual portion of the previous, un-updated frame, producing stuttering and tearing.

In other embodiments, the display screen has no frame buffer. As shown in FIG. 12, after Slice1 is sent for display, the display screen directly shows the image of Slice1; after Slice2 is sent for display, the display screen shows the images of Slice1 and Slice2; after Slice3 is sent for display, the display screen shows the images of Slice1, Slice2, and Slice3; after Slice4 is sent for display, the display screen shows the images of Slice1, Slice2, Slice3, and Slice4, that is, the complete first image.
After the parallel processing of sliced image rendering and image sending for display described above, as shown in FIG. 10, the entire image takes a total of 10 ms from the start of rendering to being fully sent for display; compared with the serial processing shown in FIG. 9, which takes 16 ms, 6 ms can be saved.
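The timing figures above can be checked with a short Python sketch (an illustrative model only, assuming each stage processes one slice at a time): a slice starts being sent only once it has been rendered and the previous slice has finished being sent.

```python
def pipeline_time(render_ms, send_ms):
    """Total time from the start of rendering to the last slice finishing its
    send, with per-slice rendering and sending overlapped as a 2-stage pipeline."""
    render_done = 0.0
    send_done = 0.0
    for r, s in zip(render_ms, send_ms):
        render_done += r                             # GPU renders slices one by one
        send_done = max(render_done, send_done) + s  # send waits for render and link
    return send_done

print(pipeline_time([2, 2, 2, 2], [2, 2, 2, 2]))  # 10.0 ms, even split (FIG. 10)
print(pipeline_time([2, 3, 1, 2], [2, 3, 1, 2]))  # 11.0 ms, uneven split (FIG. 13, below)
print(pipeline_time([8], [8]))                    # 16.0 ms, serial whole image (FIG. 9)
```

Under this model, evenly sized slices keep both stages busy and minimize idle time, which matches the comparison between the even and uneven division schemes discussed next.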
本实施例对划分每副图像的方式不作任何限制,可以划分为四片、六片等任意数量,可以横向、竖向或任意方向划分,可以平均划分、也可以不平均划分。通常情况下,平均划分图像比不平均划分图像可以更充分利用各模块并行处理的能力,节省更多时间。当平均划分时,每一个Slice的大小基本相同或相同,当不平均划分时,Slice的大小可以不同。This embodiment does not impose any restrictions on the way of dividing each image, which can be divided into any number such as four or six, can be divided horizontally, vertically or in any direction, and can be divided evenly or unevenly. Usually, dividing the image evenly can make full use of the parallel processing capability of each module and save more time than dividing the image unevenly. When divided evenly, the size of each slice is basically the same or the same; when divided unevenly, the size of each slice can be different.
参考图13,这里对比图10,给出一个不平均划分图像的情况下耗费时间的示例。假设,一整副图像渲染时间为8ms、送显的时间为8ms,在一个示例中,将图像不平均地划分为2:3:1:2比例的第五片Slice5、第六片Slice6、第七片Slice7、第八片Slice8,每片图像的渲染时间分别为2ms、3ms、1ms、2ms,送显的时间分别为2ms、3ms、1ms、2ms。Referring to FIG. 13 , comparing with FIG. 10 , an example of time-consuming in the case of unevenly dividing images is given. Assume that the rendering time of an entire image is 8ms, and the displaying time is 8ms. In one example, the image is divided into the fifth slice Slice5, the sixth slice Slice6, and the sixth slice in a ratio of 2:3:1:2. Seven slices of Slice7 and eighth slice of Slice8, the rendering time of each image is 2ms, 3ms, 1ms, 2ms respectively, and the time of sending to display is 2ms, 3ms, 1ms, 2ms respectively.
GPU等处理器先渲染Slice5,耗时2ms,当渲染完Slice5之后,将其送显,耗时2ms;在Slice5渲染完送显的同时,GPU等处理器渲染Slice6,耗时3ms,然后送显,耗时3ms;在Slice6渲染完成后送显的同时,GPU等处理器渲染Slice7,耗时1ms,然后待Slice6送显完成后,开始将Slice7送显,耗时1ms;在Slice7渲染完成后,GPU等处理器渲染Slice8,耗时2ms,在Slice7送显完成后,将Slice8送显,耗时2ms。GPU and other processors first render Slice5, which takes 2ms. After rendering Slice5, it takes 2ms to send it to the display. When Slice5 is rendered and sent to the display, the GPU and other processors render Slice6, which takes 3ms. , it takes 3ms; while Slice6 is finished rendering, GPU and other processors render Slice7, which takes 1ms. It takes 2ms for GPU and other processors to render Slice8. After Slice7 is sent to display, it takes 2ms to send Slice8 to display.
经过上述这样的分片图像渲染与图像送显的并行处理后,如图13所示,这一整副图像从渲染到完全送显总共耗时11ms,相比图9所示的图像渲染与图像送显的串行处理方式耗时16ms,可以节省5ms的时间,但是比图10所示的平均划分的并行处理方式,多耗费1ms的时间,这是由于平均划分图像更能充分并行利用渲染、送显模块的性能,减少不必要的空转时间。当图像渲染的处理能力或送显的速度不匹配时,也可以采用不平均划分的方式。After the above-mentioned parallel processing of fragmented image rendering and image sending to display, as shown in Figure 13, the entire image takes a total of 11ms from rendering to full display, compared to the image rendering and image display shown in Figure 9 The serial processing method of sending display takes 16ms, which can save 5ms time, but it takes 1ms longer than the parallel processing method of average division shown in Figure 10. This is because the average division of images can make full use of rendering, Send display module performance, reduce unnecessary idling time. When the processing power of image rendering or the speed of sending to the display does not match, an uneven division method can also be used.
The specific way of dividing the image can be customized by developers according to the actual situation. A preferred division scheme is one that minimizes the total time of image rendering plus sending-to-display while the GPU operates at maximum efficiency, that is, at a good balance between power consumption and performance.
In some embodiments, in a VR scenario, the VR glasses render the image and send it to the display after acquiring it.
In VR scenarios, common image-rendering steps include image pre-distortion and timewarp.
Image distortion is the phenomenon whereby a normal image is distorted by the inherent characteristics of the optical lenses (such as convex lenses) in VR glasses. It arises because light is bent more strongly far from the lens center than near it. Distortion is distributed along the lens radius and includes barrel distortion and pincushion distortion.
Image pre-distortion applies an inverse distortion to the normal image in advance, to counteract the distortion introduced by the optical lenses of the VR glasses, so that the image the user finally sees is the normal image.
For example, if the optical lenses in the VR glasses turn a normal image into a pincushion-distorted one, then at the image-rendering stage pre-distortion can be applied in advance to turn the normal image into a barrel-distorted one. After the barrel-distorted image passes through the pincushion distortion of the optical lenses, the image presented to the user's eyes is the normal image.
Timewarp is an image-frame correction technique. When a user wearing VR glasses moves the head too quickly, scene rendering lags behind: the head has already turned, but the new image has not been rendered yet, or the previous frame is what gets rendered. If rendering takes too long, a frame is dropped and the image judders. Timewarp alleviates rendering latency by warping an image just before it is sent to the display. The most basic form is orientation-based warping, which corrects the image shake caused by changes in head orientation and can generate a new image frame with relatively few computing resources.
When a rendered frame is not synchronized with head motion, timewarp can generate an image to stand in for the frame that has not yet been rendered, that is, automatically fill in image frames so that successive frames transition smoothly.
In one example of the VR image-rendering process shown in Figure 14, a whole frame is divided evenly, from top to bottom, into Slice1, Slice2, Slice3, and Slice4, as in the original texture shown in (a) of Figure 14. In sliced rendering, assuming the image is divided into a 4×4 grid for texture shading, each slice is a 1×4 grid, as shown in (b) of Figure 14. Each slice needs pre-distortion: for each image vertex in a slice, the distorted pixel position corresponding to that vertex can be computed from its undistorted pixel position in the original image and the distortion formula. For pixels other than the vertices, the distorted pixel positions can be computed by interpolation (such as linear, bilinear, or cubic-spline interpolation), finally producing the pre-distortion effect shown in (b) of Figure 14.
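A hedged sketch of this per-vertex pre-distortion plus interpolation follows; the radial-polynomial model and the coefficients k1, k2 are assumptions, since the embodiments do not specify the distortion formula.

```python
def predistort(x, y, k1=-0.25, k2=0.05, cx=0.5, cy=0.5):
    """Map an undistorted vertex (x, y), in normalized [0,1]^2 image
    coordinates, to its pre-distorted position with a radial polynomial
    r' = r * (1 + k1*r^2 + k2*r^4). A negative k1 gives the barrel shape
    that an (assumed) pincushion-distorting lens then cancels."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale

def bilerp(p00, p10, p01, p11, u, v):
    """Bilinear interpolation of pre-distorted positions inside one grid
    cell, for pixels that are not vertices (u, v in [0, 1])."""
    x = (1-u)*(1-v)*p00[0] + u*(1-v)*p10[0] + (1-u)*v*p01[0] + u*v*p11[0]
    y = (1-u)*(1-v)*p00[1] + u*(1-v)*p10[1] + (1-u)*v*p01[1] + u*v*p11[1]
    return x, y

# Pre-distort the four corners of one cell of a slice's grid, then place
# an interior pixel by interpolation:
corners = [predistort(x, y) for x, y in [(0.0, 0.0), (0.25, 0.0),
                                         (0.0, 0.25), (0.25, 0.25)]]
print(bilerp(*corners, u=0.5, v=0.5))
```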
In the VR image-rendering process shown in Figure 14, the image is also timewarped. By predicting the direction of head motion, the acquired image is rotated accordingly to obtain a new image frame as the final display image, shown in (c) of Figure 14 after pre-distortion and timewarp. Likewise, during timewarp rendering, for each image vertex in each slice, the predicted pixel position corresponding to that vertex can be obtained by calculation, and for pixels other than the vertices, the predicted pixel positions can be generated by interpolation (such as linear, bilinear, or cubic-spline interpolation).
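A correspondingly simplified sketch of orientation-based timewarp applied to a vertex; the small-angle shift model and the field-of-view values are assumptions, not the embodiments' formula. Pixels between vertices would again be placed by interpolation.

```python
import math

def timewarp_shift(x, y, yaw_rad, pitch_rad,
                   fov_x_rad=math.radians(90), fov_y_rad=math.radians(90)):
    """Shift a vertex in normalized [0,1]^2 image space to compensate a
    predicted head rotation. Deliberately simplified: a yaw of d radians
    moves the image horizontally by d / fov_x of the image width."""
    return x - yaw_rad / fov_x_rad, y - pitch_rad / fov_y_rad

# Predicted 2 degrees of yaw over the remaining motion-to-photon interval:
print(timewarp_shift(0.5, 0.5, math.radians(2.0), 0.0))
```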
Finally, after passing through the display screen and the optical lenses of the VR glasses, the image presented to the user's eyes is the final picture shown in (d) of Figure 14.
After each image is rendered in the GPU, it is sent to the display device, referred to for short as sending to display.
When implementing sliced image rendering and sending-to-display in parallel, care must be taken to coordinate with the hardware Vsync signal of the display screen; otherwise image tearing and similar artifacts may occur.
The hardware Vsync signal is a pulse emitted after the display finishes refreshing a whole frame. If sliced rendering and sending-to-display are not matched to the Vsync signal, the display may refresh only part of the image while the unrefreshed part still shows part of the previous frame, producing screen tearing.
For example, in the Android system, SurfaceFlinger can composite the image data in the buffers and then send it to the display device, or a hardware composer (HWC) can composite the image data in hardware and send it to the display. The Vsync signal matches the refresh rate of the screen: when a Vsync signal arrives, the screen starts refreshing pixels from top to bottom and from left to right. Furthermore, the refresh of each row can be coordinated with a horizontal sync (Hsync) signal, and the pixels of each row can be transmitted in step with a pixel clock (PCLK) signal.
When the display side receives the pixel signals, an LCD screen writes each pixel's data from left to right and from top to bottom, completing one screen refresh within one Vsync period. When sending sliced images to the display, the electronic device can control the display timing, fetching pixels at fixed positions in the buffer at fixed time points within a period and displaying them. For example, a time point may be agreed upon by which the electronic device expects the GPU to have finished rendering the image; when that time point arrives, the HWC of the electronic device sends the pixels of the image, in order, to fixed positions on the display for display.
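A minimal sketch of the fixed-time-point submission just described; the period, the per-slice offsets, and the `write_to_panel` callback are assumptions standing in for the HWC/panel interface, which the embodiments do not specify.

```python
import time

VSYNC_PERIOD_S = 1.0 / 90.0                  # assumed 90 Hz panel
SLICE_OFFSETS = [0.00, 0.25, 0.50, 0.75]     # assumed fixed fractions of the period

def submit_slices(vsync_time, slices, write_to_panel):
    """At agreed time points within one Vsync period, fetch each slice
    (assumed to be rendered by its time point) and hand it to the panel,
    which writes the pixels to a fixed region of the screen."""
    for offset, sl in zip(SLICE_OFFSETS, slices):
        deadline = vsync_time + offset * VSYNC_PERIOD_S
        wait = deadline - time.monotonic()
        if wait > 0:
            time.sleep(wait)   # wait until the agreed time point
        write_to_panel(sl)     # hand the rendered slice to the panel

# Example: submit four rendered slices starting at the current Vsync edge.
submit_slices(time.monotonic(), ["Slice1", "Slice2", "Slice3", "Slice4"], print)
```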
The display process also involves black-insertion technology, which prevents the smearing caused by the persistence of vision of the human eye. In general, black insertion can be implemented by inserting pure-black image frames, by directly turning off the display, or by setting the display brightness to zero.
In a specific example, as shown in Figure 15, within the interval between two vertical synchronization signals, the electronic device completes the rendering, sending-to-display, and display of a whole frame. In some embodiments, image rendering may begin in advance so that sending-to-display completes in time before the display signal arrives.
In some embodiments, after receiving the Vsync signal, the electronic device can set the brightness of the entire display to zero to implement black insertion, for a duration of about 80% of one Vsync period. For example, if the refresh rate of the display is 90 Hz, its Vsync period is about 11.1 ms, so the black-insertion time is about 8.9 ms.
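A quick worked check of these numbers (the 90 Hz refresh rate and the 80% fraction come from the paragraph above):

```python
refresh_hz = 90.0
vsync_period_ms = 1000.0 / refresh_hz   # ~11.1 ms per Vsync period
black_ms = 0.80 * vsync_period_ms       # ~8.9 ms of black insertion
lit_ms = vsync_period_ms - black_ms     # ~2.2 ms during which the panel is lit
print(round(vsync_period_ms, 1), round(black_ms, 1), round(lit_ms, 1))  # 11.1 8.9 2.2
```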
When black insertion ends and the display lights up, the electronic device needs to ensure that all image frames to be displayed have been fully refreshed on the screen, so that the user sees a new, complete picture.
Therefore, as shown in Figure 15, before black insertion ends, the electronic device needs to finish rendering and sending all the slices (that is, Slice1, Slice2, Slice3, and Slice4) to the display. After Slice1 is rendered, it is sent to the display; while Slice1 is being sent, Slice2 is rendered. After Slice2 is rendered, it is sent; while Slice2 is being sent, Slice3 is rendered. After Slice3 is rendered, it is sent; while Slice3 is being sent, Slice4 is rendered. After Slice4 is rendered, it is sent. At this point the whole frame has been rendered and sent, and the sending process falls within the black-insertion stage. Once Slice1, Slice2, Slice3, and Slice4 have all been sent, black insertion ends, the display lights up, and the display shows the complete image composed of Slice1, Slice2, Slice3, and Slice4. Here, both the image-rendering period and the image-sending period are shorter than one Vsync period.
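A minimal sketch of the constraint this schedule must satisfy, reusing the pipeline recurrence sketched earlier; all timings are assumed for illustration.

```python
def finishes_before_panel_lights(render_ms, send_ms, black_window_ms):
    """True if the pipelined render/send of all slices completes within
    the black-insertion window, so the panel lights up only once the
    frame has been fully refreshed."""
    render_done = link_free = 0.0
    for r, s in zip(render_ms, send_ms):
        render_done += r
        link_free = max(link_free, render_done) + s
    return link_free <= black_window_ms

# Four even slices of an assumed 4 ms render / 4 ms send frame, checked
# against the ~8.9 ms black window of the 90 Hz example above:
print(finishes_before_panel_lights([1, 1, 1, 1], [1, 1, 1, 1], 8.9))  # True
```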
The following describes how turning off the display is implemented and how a fixed focus of the display is implemented.
(1) Turning off the display
In the embodiments of this application, for an LCD display, turning off the display may include any of the following: ① turning off the backlight power supply of the display; ② turning off the backlight power supply and the display-panel power supply; ③ turning off the backlight power supply, the display panel, the screen driver integrated circuit (IC), and the backlight driver IC.
When the backlight driver IC is controlled to turn off the backlight power supply, the processor still sends display data to the display panel through the screen driver IC and the backlight driver IC, but because the backlight is off, the display cannot show images. Since display data continues to be sent to the display panel, restoring the backlight power supply is fast.
When the display-panel power supply is turned off, the display panel cannot receive the display data sent by the screen driver IC, and the panel's initial configuration data is lost. When power to the display is restored, each pixel must be initialized and configured (for example, assigned certain initial potentials). Restoring the display is therefore slower, but panel power consumption is saved. When a fast response to restoring the display is not required, the display panel can be turned off to further save power.
When the backlight driver IC is turned off, it can receive neither the backlight data sent by the processor nor the color data sent by the screen driver IC, and restoring its power similarly requires it to be initialized and configured. When the screen driver IC is turned off, it can neither receive the display data sent by the processor nor send color data to the backlight driver IC, and restoring its power likewise requires initialization and configuration. Restoring the display is therefore slow.
In the embodiments of this application, for an OLED display, turning off the display may include either of the following: ① turning off the power supply of the OLED display panel; ② turning off the power supply of the OLED display panel and of the screen driver IC.
When the processor controls the power supply of the OLED display panel to be turned off, the processor still sends display data to the screen driver IC, but because the panel's power is off, the OLED display cannot show images. When power to the display is restored, each pixel must be initialized and configured (for example, assigned certain initial potentials). Since display data continues to be sent to the OLED display panel, restoring the panel's power supply is fast.
When the screen driver IC is turned off, it can neither receive the display data sent by the processor nor send display data to the OLED display panel. Restoring its power similarly requires it to be initialized and configured, so recovery of the screen driver IC's power supply is slower.
In addition, the processor can control the power supply of the pixels in a partial region of the OLED display panel to be turned off, so that no image can be displayed in that region; in this way, display in part of the screen can be turned off.
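As an illustrative summary of the trade-off described in this subsection (deeper shutdown saves more power but costs more re-initialization on wake-up), the following sketch models the three LCD shutdown levels; the field names are descriptive labels, not a real display-driver API.

```python
from dataclasses import dataclass

@dataclass
class LcdPowerState:
    """Illustrative model of the LCD shutdown levels described above."""
    backlight_on: bool
    panel_powered: bool
    driver_ics_powered: bool

    def recovery(self):
        # Deeper shutdown saves more power but needs more re-initialization.
        if not self.driver_ics_powered:
            return "slowest: re-init driver ICs and panel, then restore backlight"
        if not self.panel_powered:
            return "slower: re-init every pixel (e.g. initial potentials)"
        return "fast: re-enable backlight; display data kept flowing to the panel"

# Case (1) above: only the backlight is off, so recovery is fast.
print(LcdPowerState(backlight_on=False, panel_powered=True,
                    driver_ics_powered=True).recovery())
```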
(2) Fixed focus of the display
The image shown on the display forms a virtual image in the user's eyes. Through the optical design of the display, the focal point corresponding to this virtual image can be set at a certain distance in front of the user's eyes, for example 2 m or 4 m; the distance can also be an interval, for example 2-4 m. The image shown on the display then appears to the user's eyes to be imaged at this fixed focal point in front of the eyes.
The above embodiments can be applied both when an electronic device displays images on its own and when multiple devices cooperate to display images.
Building on the aforementioned communication system 30, Figure 16 shows a communication system 40 based on wireless transmission. The communication system 40 includes a computer A and a head-mounted display device 200, between which data is transmitted over a wireless connection 41. Image rendering can be completed either by computer A or by the head-mounted display device 200.
In some embodiments, the image-rendering part is completed by computer A. The head-mounted display device 200 can transmit the sensor data detected by its sensors to computer A over the wireless connection 41, and computer A, after processing such as pose calculation, obtains a frame of image to be rendered. Computer A can then divide the whole frame into slices and render the slices in parallel; each time a slice finishes rendering, it is encoded and then transmitted slice by slice to the head-mounted display device 200 over the wireless connection 41. After receiving the slices of image data from computer A over the wireless connection 41, the head-mounted display device 200 decodes and sends each slice of image data to the display in parallel, and finally displays it.
The process of transmitting a picture in parallel slices in the above wireless-transmission scenario is described with reference to Figure 17.
In one example, the processes before a frame of image to be rendered is obtained, such as generating sensor data and pose calculation, are combined into image acquisition. After acquiring the whole frame of the first image to be rendered, computer A can divide it into four parts: Slice1, Slice2, Slice3, and Slice4.
Computer A first renders Slice1 and, once rendering completes, encodes it. After Slice1 is rendered, while Slice1 is being encoded, computer A renders Slice2 and encodes Slice2 once its rendering completes. After computer A finishes encoding Slice1, it transmits Slice1 to the head-mounted display device 200 over the wireless connection 41; while transmitting Slice1 to the head-mounted display device 200, computer A encodes Slice2. After Slice2 is rendered, while Slice2 is being encoded, computer A renders Slice3 and encodes Slice3 once its rendering completes. By analogy, after computer A finishes encoding Slice2, while transmitting Slice2 to the head-mounted display device 200, computer A encodes Slice3. After Slice3 is rendered, while Slice3 is being encoded, computer A renders Slice4 and encodes Slice4 once its rendering completes. After computer A finishes encoding Slice3, while transmitting Slice3 to the head-mounted display device 200, computer A encodes Slice4 and then transmits it to the head-mounted display device 200.
Similarly, after receiving Slice1, the head-mounted display device 200 decodes it. After Slice1 is decoded, the head-mounted display device 200 sends Slice1 to the display. After decoding Slice1, while sending Slice1 to the display, the head-mounted display device 200 decodes the received Slice2. By analogy, after decoding Slice2, it sends Slice2 to the display and, while doing so, decodes the received Slice3; after decoding Slice3, it sends Slice3 to the display and, while doing so, decodes the received Slice4; after decoding Slice4, it sends Slice4 to the display.
After Slice1, Slice2, Slice3, and Slice4 have all been sent to the display, in coordination with the display signal, the head-mounted display device 200 can display the complete first image on its screen.
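A minimal sketch of this sender/receiver slice pipeline, assuming a queue as the wireless link and identity placeholders standing in for render/encode/decode; a full implementation would give each stage its own thread so that encoding, transmission, decoding, and sending-to-display of different slices truly overlap.

```python
import queue
import threading

def sender(slices, link):
    """Computer A's side: render slice i, then encode it; while the encoded
    slice is in flight on the link, the loop already works on slice i+1."""
    for sl in slices:
        rendered = sl       # placeholder for per-slice rendering
        encoded = rendered  # placeholder for per-slice encoding (e.g. H.265)
        link.put(encoded)   # transmission over the (modeled) wireless link
    link.put(None)          # end-of-frame marker

def receiver(link):
    """Head-mounted display side: decode each arriving slice, then send it
    to the display while the next slice is still in flight."""
    while (item := link.get()) is not None:
        decoded = item      # placeholder for per-slice decoding
        print("send to display:", decoded)

link = queue.Queue()
tx = threading.Thread(target=sender,
                      args=(["Slice1", "Slice2", "Slice3", "Slice4"], link))
tx.start()
receiver(link)
tx.join()
```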
In some other embodiments, the image-rendering part is completed by the head-mounted display device 200. The head-mounted display device 200 can transmit the sensor data detected by its sensors to computer A over the wireless connection 41, and computer A, after processing such as pose calculation, obtains a frame of image to be rendered. Computer A can then divide the whole frame into slices, encode each slice, and transmit the slices to the head-mounted display device 200 over the wireless connection 41. After receiving the slices from computer A over the wireless connection 41, the head-mounted display device 200 decodes, renders, and sends each slice to the display in parallel, and finally displays it.
The process of transmitting a picture in parallel slices in the above wireless-transmission scenario is described with reference to Figure 18.
In one example, the processes before a not-yet-rendered frame of image is obtained, such as generating sensor data and pose calculation, are combined into image acquisition. After acquiring the whole frame of the first image, computer A can divide it into four parts: Slice1, Slice2, Slice3, and Slice4.
Computer A first encodes Slice1 and, once encoding completes, transmits it to the head-mounted display device 200 over the wireless connection 41. After finishing encoding Slice1, while transmitting Slice1 to the head-mounted display device 200, computer A encodes Slice2. By analogy, after finishing encoding Slice2, while transmitting Slice2 to the head-mounted display device 200, computer A encodes Slice3. After finishing encoding Slice3, while transmitting Slice3 to the head-mounted display device 200, computer A encodes Slice4 and then transmits it to the head-mounted display device 200.
Similarly, after receiving Slice1, the head-mounted display device 200 decodes it. After Slice1 is decoded, the head-mounted display device 200 renders Slice1 and sends it to the display. After decoding Slice1, while rendering Slice1, the head-mounted display device 200 decodes the received Slice2. By analogy, after decoding Slice2, it renders Slice2 and sends it to the display and, while rendering Slice2, decodes the received Slice3; after decoding Slice3, it renders Slice3 and sends it to the display and, while rendering Slice3, decodes the received Slice4; after decoding Slice4, it renders Slice4 and sends it to the display.
After Slice1, Slice2, Slice3, and Slice4 have all been sent to the display, in coordination with the display signal, the head-mounted display device 200 can display the complete first image on its screen.
The above embodiments place no restriction on the image codec technology. In some embodiments, image encoding and decoding may use a technology such as H.265, aiming to compress the image size and speed up image transmission. The encoding, transmission, and decoding steps may also be combined into a single transfer step.
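Since the encoding, transmission, and decoding steps can be folded into a single transfer step, the two-stage recurrence sketched earlier generalizes to any number of pipeline stages. Below is a minimal sketch; the stage costs are illustrative assumptions, not figures from the embodiments.

```python
def multistage_latency_ms(stage_ms):
    """Pipeline latency when each slice must pass stages 0..K-1 in order
    and each stage processes one slice at a time. stage_ms[k][i] is the
    cost of slice i at stage k (e.g. render, transfer, send-to-display)."""
    n_stages, n_slices = len(stage_ms), len(stage_ms[0])
    done = [[0.0] * n_slices for _ in range(n_stages)]
    for i in range(n_slices):
        for k in range(n_stages):
            prev_slice = done[k][i - 1] if i > 0 else 0.0  # stage k is busy
            prev_stage = done[k - 1][i] if k > 0 else 0.0  # slice i not ready
            done[k][i] = max(prev_slice, prev_stage) + stage_ms[k][i]
    return done[-1][-1]

# render, transfer (= encode + transmit + decode), send-to-display,
# for four slices with assumed per-slice costs of 2, 3, and 2 ms:
print(multistage_latency_ms([[2]*4, [3]*4, [2]*4]))  # 16.0 vs. 28 ms serially
```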
Throughout the transmission process, both the sending end (computer A) and the receiving end (head-mounted display device 200) process the stages of the sliced image (encoding, transmission, decoding, rendering, sending-to-display) in parallel, greatly shortening the delay in image transmission and display, so the image can be displayed sooner. For a detailed description of image rendering and sending-to-display, refer to the foregoing description; it is not repeated here.
It should be noted that Figures 16, 17, and 18 are merely examples and do not limit other embodiments of this application. The sending end and receiving end shown in communication system 40 may also be any other type of electronic device. The connection between the sending end and the receiving end is not limited to a wireless connection; it may also be a wired or other connection. The content transmitted between the sending end and the receiving end is not limited to pictures and may also be files, videos, and so on. The way of dividing an image is likewise not limited to this example. Other scenarios based on the same scheme all fall within the protection scope of this application.
With reference to the foregoing embodiments, an image transmission and display method provided by the embodiments of this application is described below.
Embodiment 1
This embodiment is described taking a communication system composed of a first device and a second device as an example. The first device and the second device cooperate to display images: the second device is the image sending end, the first device is the image receiving end, and the first device includes a display device for displaying images. In Embodiment 1, image rendering is completed by the first device.
The embodiments of this application place no restriction on the type of each device, which may be a mobile phone, a PC, a smart screen, a head-mounted display device, and so on. It can be understood that the method provided in this embodiment is merely an example and does not limit other embodiments of this application; an actual service scenario may include more or fewer terminal devices. For example, the first device is the aforementioned head-mounted display device 200 and the second device is the aforementioned computer A; the first device and the second device form the communication system 40, which can display images using technologies such as VR, AR, and MR, giving the user the sense of a virtual environment and providing a VR/AR/MR experience. This embodiment does not limit the operating systems carried by the first device and the second device; the software system of the first device or the second device includes, but is not limited to, the systems shown in Figure PCTCN2022091722-appb-000017 or other operating systems.
Figure 19 is a flowchart of the image transmission and display method provided by Embodiment 1, which specifically includes the following steps:
S101: The first device establishes a first connection with the second device.
In this embodiment of the application, the first connection established between the first device and the second device may include a wired connection, such as an HDMI, DP, or USB connection, or a wireless connection, such as a Bluetooth, Wi-Fi, or hotspot connection, and may also be an Internet connection, enabling the first device and the second device to communicate under the same account, without accounts, or under different accounts. This embodiment of the application does not limit the type of the first connection; moreover, the first connection may also be a communication connection combining any of the above, which is likewise not limited.
In one implementation, the first device may establish the first connection with the second device through Wi-Fi-based near-field networking communication, such as a Wi-Fi P2P connection.
After the first device establishes the first connection with the second device, they can send data to each other. The data transmitted in this embodiment is data corresponding to multiple frames of images. Of course, this embodiment places no restriction on the type of data transmitted between the first device and the second device; besides images, video, audio, text, and so on may also be transmitted.
S102: The second device obtains a third image and divides the third image into a first image and a second image.
The third image obtained by the second device may be generated by the second device from the sensor data fed back by the first device, or may be an image generated by the second device itself; this embodiment places no restriction on the source or process of obtaining the third image.
After obtaining the third image and before transmitting it, the second device may divide the third image, for example dividing the whole third-image area into small regions that do not overlap one another.
In this embodiment, dividing the third image into two parts, namely the first image and the second image, is used as an example for description. In other embodiments of this application, the third image may also be divided into three or more parts; the sizes of the divided regions are not limited in any way, nor is the basis for the division. It suffices that the divided partial images can be combined into the complete third image.
S103: The second device encodes the first image.
During image transmission, to improve transmission efficiency and reduce image distortion, encoding and decoding may be performed before and after transmission. The first device and the second device can negotiate the codec format according to the type of the first connection and the content to be transmitted. This embodiment places no restriction on the codec scheme; in some cases there may even be no encoding/decoding at all.
S104: The second device sends the first image to the first device.
After finishing encoding the first image, the second device sends the first image to the first device.
S105: The second device encodes the second image.
After the encoding of the first image completes, while the second device sends the first image to the first device, the second device encodes the second image. Steps S104 and S105 occur simultaneously; that is, the second device's sending of the first image to the first device and its encoding of the second image are processed in parallel.
S106: The second device sends the second image to the first device.
After finishing encoding the second image, the second device sends the second image to the first device.
S107: The first device decodes the first image.
After receiving the first image from the second device, the first device decodes it. At the same time, the second device sends the second image to the first device. Steps S106 and S107 occur simultaneously; that is, the first device's decoding of the first image and the second device's sending of the second image to the first device are processed in parallel.
S108: The first device renders the first image.
After decoding the first image, the first device renders it.
S109: The first device decodes the second image.
After decoding the first image, the first device starts decoding the second image. At the same time, the first device is rendering the first image. Steps S108 and S109 occur simultaneously; that is, the first device's rendering of the first image and its decoding of the second image are processed in parallel.
S110: The first device sends the first image to the display device.
After rendering the first image, the first device sends it to the display device, which may also be called sending to display. If the display device includes a frame buffer, this step may mean that the first device sends the first image to the frame buffer.
S111: The first device renders the second image.
After rendering the first image, the first device renders the second image. At the same time, the first device sends the first image to the display. Steps S110 and S111 occur simultaneously; that is, the first device's sending of the first image to the display and its rendering of the second image are processed in parallel.
S112: The first device sends the second image to the display device.
After rendering the second image, the first device sends it to the display device. If the display device includes a frame buffer, this step may mean that the first device sends the second image to the frame buffer.
S113: The first device obtains the image data of the first image and the second image and displays the third image, the third image including the first image and the second image.
In some embodiments, the display device includes a frame buffer. After both the first image and the second image have been sent to the display, the first device waits for black insertion to end, obtains the pixel data of the first image and the second image from the frame buffer, and the display lights up, showing the complete third image, that is, the third image comprising the first image and the second image. For example descriptions of image rendering and sending-to-display, refer to the foregoing embodiments; they are not repeated here.
In other embodiments, the display device has no frame buffer. The first device receives and responds to the first image signal of the first image and directly displays the first image first; it then receives and responds to the second image signal of the second image and displays the second image. At that point both the first image and the second image have been displayed, that is, the third image is displayed in full.
In this embodiment, the description divides the third image into two parts, the first image and the second image; other embodiments with more parts are not repeated. It can be understood that however many parts the third image is divided into, the idea of parallel processing is the same: within the same time period, different stages (encoding, transmission, decoding, rendering, sending-to-display, and so on) of different parts of the image can be processed in parallel, reducing the delay in image transmission and display so that the image can be displayed sooner.
Embodiment 2
This embodiment is described taking a communication system composed of a first device and a second device as an example. The first device and the second device cooperate to display images: the second device is the image sending end, the first device is the image receiving end, and the first device includes a display device for displaying images. Unlike Embodiment 1, in Embodiment 2 image rendering is completed by the second device.
The embodiments of this application place no restriction on the type of each device, which may be a mobile phone, a PC, a smart screen, a head-mounted display device, and so on. It can be understood that the method provided in this embodiment is merely an example and does not limit other embodiments of this application; an actual service scenario may include more or fewer terminal devices. For example, the first device is the aforementioned head-mounted display device 200 and the second device is the aforementioned computer A; the first device and the second device form the communication system 40, which can display images using technologies such as VR, AR, and MR, giving the user the sense of a virtual environment and providing a VR/AR/MR experience. This embodiment does not limit the operating systems carried by the first device and the second device; the software system of the first device or the second device includes, but is not limited to, the systems shown in Figure PCTCN2022091722-appb-000018 or other operating systems.
Figure 20 is a flowchart of the image transmission and display method provided by Embodiment 2, which specifically includes the following steps:
S201: The first device establishes a first connection with the second device.
In this embodiment of the application, the first connection established between the first device and the second device may include a wired connection, such as an HDMI, DP, or USB connection, or a wireless connection, such as a Bluetooth, Wi-Fi, or hotspot connection, and may also be an Internet connection, enabling the first device and the second device to communicate under the same account, without accounts, or under different accounts. This embodiment of the application does not limit the type of the first connection; moreover, the first connection may also be a communication connection combining any of the above, which is likewise not limited.
In one implementation, the first device may establish the first connection with the second device through Wi-Fi-based near-field networking communication, such as a Wi-Fi P2P connection.
After the first device establishes the first connection with the second device, they can send data to each other. The data transmitted in this embodiment is data corresponding to multiple frames of images. Of course, this embodiment places no restriction on the type of data transmitted between the first device and the second device; besides images, video, audio, text, and so on may also be transmitted.
S202: The second device obtains a third image and divides the third image into a first image and a second image.
The third image obtained by the second device may be generated by the second device from the sensor data fed back by the first device, or may be an image generated by the second device itself; this embodiment places no restriction on the source or process of obtaining the third image.
After obtaining the third image and before transmitting it, the second device may divide the third image, for example dividing the whole third-image area into small regions that do not overlap one another.
In this embodiment, dividing the third image into two parts, namely the first image and the second image, is used as an example for description. In other embodiments of this application, the third image may also be divided into three or more parts; the sizes of the divided regions are not limited in any way, nor is the basis for the division. It suffices that the divided partial images can be combined into the complete third image.
S203: The second device renders the first image.
For the specific rendering process, refer to the foregoing embodiments; it is not repeated here.
S204: The second device encodes the first image.
During image transmission, to improve transmission efficiency and reduce image distortion, encoding and decoding may be performed before and after transmission. The first device and the second device can negotiate the codec format according to the type of the first connection and the content to be transmitted. This embodiment places no restriction on the codec scheme; in some cases there may even be no encoding/decoding at all.
S205: The second device renders the second image.
While encoding the first image, the second device renders the second image. Steps S204 and S205 occur simultaneously; that is, the second device's encoding of the first image and its rendering of the second image are processed in parallel.
S206: The second device sends the first image to the first device.
After finishing encoding the first image, the second device sends the first image to the first device.
S207: The second device encodes the second image.
After the encoding of the first image completes, while the second device sends the first image to the first device, the second device encodes the second image. Steps S206 and S207 occur simultaneously; that is, the second device's sending of the first image to the first device and its encoding of the second image are processed in parallel.
S208: The second device sends the second image to the first device.
After finishing encoding the second image, the second device sends the second image to the first device.
S209: The first device decodes the first image.
After receiving the first image from the second device, the first device decodes it. At the same time, the second device sends the second image to the first device. Steps S208 and S209 occur simultaneously; that is, the first device's decoding of the first image and the second device's sending of the second image to the first device are processed in parallel.
S210: The first device sends the first image to the display device.
After the first image is decoded, the first device sends it to the display device, which may also be called sending to display. If the display device includes a frame buffer, this step may mean that the first device sends the first image to the frame buffer.
S211: The first device decodes the second image.
After decoding the first image, the first device starts decoding the second image. At the same time, the first device is sending the first image to the display. Steps S210 and S211 occur simultaneously; that is, the first device's sending of the first image to the display and its decoding of the second image are processed in parallel.
S212: The first device sends the second image to the display device.
After the second image is decoded, the first device sends it to the display device. If the display device includes a frame buffer, this step may mean that the first device sends the second image to the frame buffer.
S213: The first device obtains the image data of the first image and the second image and displays the third image, the third image including the first image and the second image.
In some embodiments, the display device includes a frame buffer. After both the first image and the second image have been sent to the display, the first device waits for black insertion to end, obtains the pixel data of the first image and the second image from the frame buffer, and the display lights up, showing the complete third image, that is, the third image comprising the first image and the second image. For example descriptions of image rendering and sending-to-display, refer to the foregoing embodiments; they are not repeated here.
In other embodiments, the display device has no frame buffer. The first device receives and responds to the first image signal of the first image and directly displays the first image first; it then receives and responds to the second image signal of the second image and displays the second image. At that point both the first image and the second image have been displayed, that is, the third image is displayed in full.
In this embodiment, the description divides the third image into two parts, the first image and the second image; other embodiments with more parts are not repeated. It can be understood that however many parts the third image is divided into, the idea of parallel processing is the same: within the same time period, different stages (encoding, transmission, decoding, rendering, sending-to-display, and so on) of different parts of the image can be processed in parallel, reducing the delay in image transmission and display so that the image can be displayed sooner.
Embodiment 3
This embodiment is described taking the first device displaying images on its own as an example. The embodiments of this application place no restriction on the type of the first device, which may be a mobile phone, a PC, a smart screen, a head-mounted display device, and so on. The first device includes a display device for displaying images. It can be understood that the method provided in this embodiment is merely an example and does not limit other embodiments of this application. For example, the first device is the aforementioned all-in-one head-mounted display device 200, which can display images using technologies such as VR, AR, and MR, giving the user the sense of a virtual environment and providing a VR/AR/MR experience. This embodiment does not limit the operating system carried by the first device, which includes, but is not limited to, the systems shown in Figure PCTCN2022091722-appb-000019 or other operating systems.
Figure 21 is a flowchart of the image transmission and display method provided by Embodiment 3, which specifically includes the following steps:
S301: The first device obtains a third image and divides the third image into a first image and a second image.
The third image obtained by the first device may be a third image generated by the first device from the sensor data fed back by its sensors, or a third image received from a cloud server; this embodiment places no restriction on the source or process of obtaining the third image.
After obtaining the third image, the GPU of the first device may divide it, that is, divide the whole third-image area into small regions that do not overlap one another.
In this embodiment, dividing the third image into two parts, namely the first image and the second image, is used as an example for description. In other embodiments of this application, the third image may also be divided into three or more parts; the sizes of the divided regions are not limited in any way, nor is the basis for the division. It suffices that the divided partial images can be combined into the complete third image.
This step is not necessarily performed by the GPU; it may also be performed by the CPU or another processor of the first device.
S302: The first device renders the first image.
For an example description of image rendering in a VR scenario, refer to the foregoing embodiments; it is not repeated here.
S303: The first device sends the first image to the display device.
After rendering the first image, the first device sends it to the display device, which may also be called sending to display. If the display device includes a frame buffer, this step may mean that the first device sends the first image to the frame buffer. For an example description of sending images to the display in a VR scenario, refer to the foregoing embodiments; it is not repeated here.
S304: The first device renders the second image.
After rendering the first image, the first device renders the second image. At the same time, the first device sends the first image to the display. Steps S303 and S304 occur simultaneously; that is, the first device's sending of the first image to the display and its rendering of the second image are processed in parallel.
S305: The first device sends the second image to the display device.
After rendering the second image, the first device sends it to the display. If the display device includes a frame buffer, this step may mean that the first device sends the second image to the frame buffer.
S306,第一设备获取第一图像和第二图像的图像数据。S306. The first device acquires image data of the first image and the second image.
S307,第一设备显示第三图像,第三图像包括第一图像和第二图像。S307. The first device displays a third image, where the third image includes the first image and the second image.
In some embodiments, the display device includes a frame buffer. After both the first image and the second image have been sent for display, the first device waits for the black insertion to end, acquires the pixel data of the first image and the second image from the frame buffer, and the display screen lights up to show the complete third image; that is, the third image includes the first image and the second image.

In other embodiments, the display device has no frame buffer. The first device receives the first image signal of the first image and, in response, directly displays the first image; the first device then receives the second image signal of the second image and, in response, displays the second image. At this point both the first image and the second image have been displayed, that is, the third image has been displayed in full.
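The frame-buffer variant of step S307 might be wired as below; the Event standing in for the end of black insertion, the dict standing in for the frame buffer, and light_panel are all hypothetical stand-ins, not interfaces defined by this application.

```python
import threading

frame_buffer = {}                      # stand-in for the display's frame buffer
black_insertion_done = threading.Event()

def on_slice_received(name, pixels):
    frame_buffer[name] = pixels        # S303/S305 deposit slices here

def light_panel(frame):                # hypothetical panel-drive call
    ...

def scan_out():
    # S307, frame-buffer variant: wait for black insertion to end,
    # then read both slices and light the panel with the full frame.
    black_insertion_done.wait()
    full_frame = (frame_buffer["first_image"], frame_buffer["second_image"])
    light_panel(full_frame)

on_slice_received("first_image", b"...")
on_slice_received("second_image", b"...")
black_insertion_done.set()
scan_out()                             # displays the recombined third image
```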
In this embodiment, the description is based on dividing the third image into two parts, the first image and the second image; other embodiments that divide it into more parts are not described again. It can be understood that no matter how many parts the third image is divided into, the idea of parallel processing is the same: within the same time period, different stages (rendering, sending for display, and so on) of different parts of the image can be processed in parallel, thereby reducing the delay in image transmission and display so that the image can be displayed sooner.
Embodiment 4

Embodiment 4 provides an image transmission and display method. The method is used for display by a first device, and the first device includes a display device. The method may include: between a first vertical synchronization signal and a second vertical synchronization signal, the first device transmits a first image signal to the display device; and between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits a second image signal to the display device, where the first image signal is not synchronized with the second image signal. The process is illustrated in Figure 22.
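Read as timing, this means both slice signals are transmitted inside one and the same vsync interval; the sketch below, with an assumed 90 Hz panel and a hypothetical transmit function, paces one such slot:

```python
import time

VSYNC_INTERVAL = 1 / 90              # assumed 90 Hz panel, for illustration only

def transmit(signal):                # placeholder for the display interface
    ...

def frame_slot(first_signal, second_signal):
    # Both image signals are transmitted between the same pair of
    # vertical synchronization signals; they start at different times
    # within the slot, i.e. they are not synchronized with each other.
    slot_start = time.monotonic()
    transmit(first_signal)
    transmit(second_signal)
    # Sleep out the remainder of the slot until the next vsync.
    elapsed = time.monotonic() - slot_start
    time.sleep(max(0.0, VSYNC_INTERVAL - elapsed))

frame_slot("first_image_signal", "second_image_signal")
```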
Implementing Embodiment 4, a whole image is divided into multiple parts and those parts are sent for display in parallel, so that the display path is optimized. This reduces the waiting time during image transmission and display, further lowers the transmission and display delay, and lets the image be displayed sooner.
With reference to Embodiment 4, in some embodiments, the first device's transmission of the first image signal to the display device also falls between a first display signal and a second display signal, and the first device's transmission of the second image signal to the display device likewise falls between the first display signal and the second display signal.

With reference to Embodiment 4, in some embodiments, the time interval between the first vertical synchronization signal and the second vertical synchronization signal is T1, the time interval between the first display signal and the second display signal is T2, and T1 equals T2.

With reference to Embodiment 4, in some embodiments, between the first vertical synchronization signal and the second vertical synchronization signal, the display device displays a black-inserted image frame; the period of the black-inserted image frame is T3, and T3 is shorter than T1.

With reference to Embodiment 4, in some embodiments, in response to the first vertical synchronization signal and the second vertical synchronization signal, the display device starts displaying the black-inserted image frame. In response to the first display signal or the second display signal, the display device stops displaying the black-inserted image frame and displays a third image, where the third image includes the first image and the second image.

With reference to Embodiment 4, in some embodiments, the first image signal is the image signal of the rendered first image, and the second image signal is the image signal of the rendered second image. That is, before the first image and the second image are sent for display slice by slice, the first image and the second image are each rendered slice by slice. The process is illustrated in Figure 23.

With reference to Embodiment 4, in some embodiments, while the first device transmits the first image signal to the display device, the first device renders the second image; that is, the step of sending the first image for display runs in parallel with the step of rendering the second image. Here, the first time is the time taken by the first device from starting to render the first image until it finishes transmitting the second image to the display device, and the second time is the time the first device would take to render the third image and transmit the third image to the display device on its own, where the third image includes the first image and the second image; the first time is shorter than the second time. The process is illustrated in Figure 23.
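A toy timing model makes the inequality concrete (all numbers illustrative; half_pipeline and full_serial are hypothetical names): with per-slice render cost r and per-slice send cost s, overlapping the send of slice 1 with the render of slice 2 saves min(r, s).

```python
def full_serial(r, s):
    # Second time: render the whole third image, then send it.
    return 2 * r + 2 * s

def half_pipeline(r, s):
    # First time: render slice 1, then overlap sending slice 1
    # with rendering slice 2, then send slice 2.
    return r + max(r, s) + s

r, s = 4.0, 3.0                                   # illustrative per-slice costs (ms)
assert half_pipeline(r, s) < full_serial(r, s)    # 11 ms < 14 ms
```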
With reference to Embodiment 4, in some embodiments, before the first device transmits the first image signal to the display device, the first device renders the first image.

With reference to Embodiment 4, in some embodiments, the first device further includes an image rendering module, the image rendering module is configured to render the first image and the second image, and the display device sends a feedback signal to the image rendering module, where the feedback signal indicates the vertical synchronization information of the display device.
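One plausible shape for this feedback path, sketched with hypothetical names (Renderer, on_vsync_feedback, and the commented-out display registration call are illustrative, not interfaces from this application): the display reports its vsync timing so the rendering module can pace the next slice.

```python
class Renderer:
    """Stand-in for the image rendering module of the first device."""

    def __init__(self):
        self.last_vsync = None

    def on_vsync_feedback(self, vsync_timestamp):
        # Feedback signal from the display device carrying its
        # vertical synchronization information.
        self.last_vsync = vsync_timestamp
        self.kick_off_next_slice()

    def kick_off_next_slice(self):
        ...

renderer = Renderer()
# display.register_vsync_callback(renderer.on_vsync_feedback)  # hypothetical display API
```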
With reference to Embodiment 4, in some embodiments, before the first device renders the first image, the first device receives the first image transmitted by a second device; before the first device renders the second image, the first device receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image slice by slice, it receives the first image and the second image transmitted slice by slice from the second device. The process is illustrated in Figure 24.

With reference to Embodiment 4, in some embodiments, while the first device renders the first image, the first device receives the second image transmitted by the second device; that is, the step of rendering the first image runs in parallel with the step of transmitting the second image. Here, the third time is the time taken by the first device from starting to receive the first image transmitted by the second device until it finishes transmitting the second image to the display device, and the fourth time is the time from the first device starting to receive the third image transmitted by the second device until it finishes transmitting the third image to the display device; the third time is shorter than the fourth time. The process is illustrated in Figure 24.

With reference to Embodiment 4, in some embodiments, before the first device transmits the first image signal to the display device, the first device receives the first image transmitted by the second device; before the first device transmits the second image signal to the display device, the first device receives the second image transmitted by the second device. That is, before the first device sends the first image and the second image for display slice by slice, it receives the first image and the second image transmitted slice by slice from the second device. The process is illustrated in Figure 25.

With reference to Embodiment 4, in some embodiments, while the first device transmits the first image signal to the display device, the first device receives the second image transmitted by the second device; that is, the step of sending the first image for display runs in parallel with the step of transmitting the second image. Here, the fifth time is the time taken by the first device from starting to receive the first image transmitted by the second device until it finishes transmitting the second image to the display device, and the sixth time is the time from the first device starting to receive the third image transmitted by the second device until it finishes transmitting the third image to the display device; the fifth time is shorter than the sixth time. The process is illustrated in Figure 25.

With reference to Embodiment 4, in some embodiments, both the first image and the second image are rendered by the second device. That is, before the first device receives the first image and the second image transmitted slice by slice by the second device, the second device renders the first image and the second image slice by slice. The process is illustrated in Figure 26.

With reference to Embodiment 4, in some embodiments, while the second device renders the second image, the first device receives the first image transmitted by the second device; that is, the step of transmitting the first image runs in parallel with the step of rendering the second image. Here, the seventh time is the time taken from the second device starting to render the first image until the first device finishes transmitting the second image to the display device, and the eighth time is the time from the second device starting to render the third image until the first device finishes transmitting the third image to the display device; the seventh time is shorter than the eighth time. The process is illustrated in Figure 26.
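Taken together, rendering on the second device, transmission over the link, and sending for display on the first device form a three-stage pipeline over the slices. A minimal queue-based sketch, with every stage function a hypothetical placeholder, shows how rendering slice 2 overlaps the downstream stages of slice 1:

```python
import queue
import threading

def run_stage(work, inbox, outbox):
    # Pull slices from the upstream stage, process, hand downstream.
    while True:
        item = inbox.get()
        if item is None:               # sentinel: pipeline drained
            if outbox is not None:
                outbox.put(None)
            return
        result = work(item)
        if outbox is not None:
            outbox.put(result)

def render(s): return s                # placeholder: second device renders a slice
def transmit(s): return s              # placeholder: link from second to first device
def send_to_display(s): return s       # placeholder: first device sends for display

def pipeline(slices):
    q_render, q_link = queue.Queue(), queue.Queue()
    stages = [
        threading.Thread(target=run_stage, args=(transmit, q_render, q_link)),
        threading.Thread(target=run_stage, args=(send_to_display, q_link, None)),
    ]
    for t in stages:
        t.start()
    for s in slices:
        q_render.put(render(s))        # rendering overlaps the downstream stages
    q_render.put(None)
    for t in stages:
        t.join()

pipeline(["first_image", "second_image"])
```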
With reference to Embodiment 4, in some embodiments, in response to the second display signal, the first device displays a third image, where the third image includes the first image and the second image.

With reference to Embodiment 4, in some embodiments, in response to the first image signal, the first device displays the first image; in response to the second image signal, the first device displays the second image.

With reference to Embodiment 4, in some embodiments, the display device further includes a frame buffer, and the frame buffer is used to store the pixel data of the first image and the second image.

With reference to Embodiment 4, in some embodiments, the first device reads the pixel data of the first image and the second image from the frame buffer, and the first device displays the third image. That is, after the pixel data of the first image and the second image has been acquired, the display device refreshes and displays a whole frame of the third image.

With reference to Embodiment 4, in some embodiments, the first device reads the pixel data of the first image from the frame buffer and displays the first image; the first device then reads the pixel data of the second image from the frame buffer and displays the second image. That is, the first device displays the first image as soon as it has acquired the first image's pixel data, and displays the second image after acquiring the second image's pixel data. The first image and the second image together form a complete frame, the third image.
Embodiment 5

Embodiment 5 provides an image transmission and display method. The method is used for display by a first device, and the first device includes a display device. The method may include: the first device transmits a first image to the display device, the first device transmits a second image to the display device, and the display device displays a third image, where the third image includes the first image and the second image. The process is illustrated in Figure 22.

Implementing Embodiment 5, a whole image can be divided into multiple parts and those parts sent for display in parallel, so that the display path is optimized. This reduces the waiting time during image transmission and display, further lowers the transmission and display delay, and lets the image be displayed sooner.

With reference to Embodiment 5, in some embodiments, before the first device transmits the first image to the display device, the first device renders the first image; before the first device transmits the second image to the display device, the first device renders the second image. That is, before the first image and the second image are sent for display slice by slice, they are each rendered slice by slice. The process is illustrated in Figure 23.

With reference to Embodiment 5, in some embodiments, while the first device transmits the first image signal to the display device, the first device renders the second image; that is, the step of sending the first image for display runs in parallel with the step of rendering the second image. Here, the first time is the time taken by the first device from starting to render the first image until it finishes transmitting the second image to the display device, and the second time is the time the first device would take to render the third image and transmit the third image to the display device on its own, where the third image includes the first image and the second image; the first time is shorter than the second time. The process is illustrated in Figure 23.

With reference to Embodiment 5, in some embodiments, before the first device renders the first image, the first device receives the first image transmitted by a second device; before the first device renders the second image, the first device receives the second image transmitted by the second device. That is, before the first device renders the first image and the second image slice by slice, it receives the first image and the second image transmitted slice by slice from the second device. The process is illustrated in Figure 24.

With reference to Embodiment 5, in some embodiments, while the first device renders the first image, the first device receives the second image transmitted by the second device; that is, the step of rendering the first image runs in parallel with the step of transmitting the second image. Here, the third time is the time taken by the first device from starting to receive the first image transmitted by the second device until it finishes transmitting the second image to the display device, and the fourth time is the time from the first device starting to receive the third image transmitted by the second device until it finishes transmitting the third image to the display device; the third time is shorter than the fourth time. The process is illustrated in Figure 24.

With reference to Embodiment 5, in some embodiments, before the first device transmits the first image to the display device, the first device receives the first image transmitted by the second device; before the first device transmits the second image to the display device, the first device receives the second image transmitted by the second device. That is, before the first device sends the first image and the second image for display slice by slice, it receives the first image and the second image transmitted slice by slice from the second device. The process is illustrated in Figure 25.

With reference to Embodiment 5, in some embodiments, while the first device transmits the first image to the display device, the first device receives the second image transmitted by the second device; that is, the step of sending the first image for display runs in parallel with the step of transmitting the second image. Here, the fifth time is the time taken by the first device from starting to receive the first image transmitted by the second device until it finishes transmitting the second image to the display device, and the sixth time is the time from the first device starting to receive the third image transmitted by the second device until it finishes transmitting the third image to the display device; the fifth time is shorter than the sixth time. The process is illustrated in Figure 25.

With reference to Embodiment 5, in some embodiments, the first image and the second image are rendered by the second device. That is, before the first device receives the first image and the second image transmitted slice by slice by the second device, the second device renders the first image and the second image slice by slice. The process is illustrated in Figure 26.

With reference to Embodiment 5, in some embodiments, while the second device renders the second image, the first device receives the first image transmitted by the second device; that is, the step of transmitting the first image runs in parallel with the step of rendering the second image. Here, the seventh time is the time taken from the second device starting to render the first image until the first device finishes transmitting the second image to the display device, and the eighth time is the time from the second device starting to render the third image until the first device finishes transmitting the third image to the display device; the seventh time is shorter than the eighth time. The process is illustrated in Figure 26.

With reference to Embodiment 5, in some embodiments, the display device further includes a frame buffer, and the frame buffer stores the pixel data of the first image and the second image.

With reference to Embodiment 5, in some embodiments, the first device reads the pixel data of the first image and the second image from the frame buffer, and the display device then displays the third image. That is, after the pixel data of the first image and the second image has been acquired, the display device refreshes and displays a whole frame of the third image.

With reference to Embodiment 5, in some embodiments, the method further includes: in response to the first image signal, the first device displays the first image; in response to the second image signal, the first device displays the second image.

With reference to Embodiment 5, in some embodiments, the first device reads the pixel data of the first image from the frame buffer and displays the first image; the first device then reads the pixel data of the second image from the frame buffer and displays the second image. That is, the first device displays the first image as soon as it has acquired the first image's pixel data, and displays the second image after acquiring the second image's pixel data. The first image and the second image together form a complete frame, the third image.
Embodiment 6

Embodiment 6 provides an image transmission and display method. The method may include: a first device establishes a connection with a second device; the second device transmits a first image to the first device over the connection, and the second device transmits a second image to the first device over the connection; the first device includes a display device, the first device transmits the first image to the display device, and the first device transmits the second image to the display device; the display device displays a third image, where the third image includes the first image and the second image. The process is illustrated in Figure 25.

Implementing Embodiment 6, the second device divides a whole image into multiple parts and transmits them to the first device in parallel, and the first device then sends those parts for display in parallel, so that the display path is optimized. This reduces the waiting time during image transmission and display, further lowers the transmission and display delay, and lets the image be displayed sooner.

With reference to Embodiment 6, in some embodiments, while the first device transmits the first image to the display device, the second device transmits the second image to the first device over the connection; that is, the step of sending the first image for display runs in parallel with the step of transmitting the second image. Here, the fifth time is the time taken from the second device starting to transmit the first image to the first device until the first device finishes transmitting the second image to the display device, and the sixth time is the time from the second device starting to transmit the third image to the first device until the first device finishes transmitting the third image to the display device; the fifth time is shorter than the sixth time. The process is illustrated in Figure 25.

With reference to Embodiment 6, in some embodiments, before the second device transmits the first image to the first device over the connection, the second device renders the first image; before the second device transmits the second image to the first device over the connection, the second device renders the second image. That is, before the first device receives the first image and the second image transmitted slice by slice by the second device, the second device renders the first image and the second image slice by slice. The process is illustrated in Figure 26.

With reference to Embodiment 6, in some embodiments, while the first device receives the first image transmitted by the second device, the second device renders the second image; that is, the step of transmitting the first image runs in parallel with the step of rendering the second image. Here, the seventh time is the time taken from the second device starting to render the first image until the first device finishes transmitting the second image to the display device, and the eighth time is the time from the second device starting to render the third image until the first device finishes transmitting the third image to the display device; the seventh time is shorter than the eighth time. The process is illustrated in Figure 26.

With reference to Embodiment 6, in some embodiments, the method further includes: after the second device transmits the first image to the first device over the connection and before the first device transmits the first image to the display device, the first device renders the first image; after the second device transmits the second image to the first device over the connection and before the first device transmits the second image to the display device, the first device renders the second image. The process is illustrated in Figure 24.

With reference to Embodiment 6, in some embodiments, while the second device transmits the second image to the first device over the connection, the first device renders the first image; that is, the step of rendering the first image runs in parallel with the step of transmitting the second image. Here, the third time is the time taken from the second device starting to send the first image to the first device until the first device finishes transmitting the second image to the display device, and the fourth time is the time from the second device starting to transmit the third image to the first device until the first device finishes transmitting the third image to the display device; the third time is shorter than the fourth time. The process is illustrated in Figure 24.

With reference to Embodiment 6, in some embodiments, the display device further includes a frame buffer, and the frame buffer stores the pixel data of the first image and the second image.

With reference to Embodiment 6, in some embodiments, the first device reads the pixel data of the first image and the second image from the frame buffer, and the first device displays the third image. That is, after the pixel data of the first image and the second image has been acquired, the display device refreshes and displays a whole frame of the third image.

With reference to Embodiment 6, in some embodiments, the method further includes: in response to a first image signal, the first device displays the first image; in response to a second image signal, the first device displays the second image.

With reference to Embodiment 6, in some embodiments, the first device reads the pixel data of the first image from the frame buffer and displays the first image; the first device then reads the pixel data of the second image from the frame buffer and displays the second image. That is, the first device displays the first image as soon as it has acquired the first image's pixel data, and displays the second image after acquiring the second image's pixel data. The first image and the second image together form a complete frame, the third image.
An embodiment of this application further provides an electronic device. The electronic device may include a communication apparatus, a display apparatus, a memory, a processor coupled to the memory, multiple application programs, and one or more programs. The memory stores computer-executable instructions, the display apparatus is configured to display images, and when the processor executes the instructions, the electronic device can implement any function of the first device in Embodiment 4 or Embodiment 5.

An embodiment of this application further provides a computer storage medium. The storage medium stores a computer program, and the computer program includes executable instructions that, when executed by a processor, cause the processor to perform the operations corresponding to the methods provided in Embodiment 4 or Embodiment 5.

An embodiment of this application further provides a computer program product. When the computer program product runs on an electronic device, the electronic device is caused to perform any possible implementation of Embodiment 4 or Embodiment 5.

An embodiment of this application further provides a chip system. The chip system may be applied to an electronic device; the chip includes one or more processors, and the processors are configured to invoke computer instructions so that the electronic device implements any possible implementation of Embodiment 4 or Embodiment 5.

An embodiment of this application further provides a communication system. The communication system includes a first device and a second device, and the first device can implement some of the functions of the first device in Embodiment 4 or Embodiment 5.

Implementing the above methods provided in this application can reduce the delay during image transmission and display, so images can be displayed sooner. Further, in AR/VR or MR scenarios, the MTP (motion-to-photon) latency can be reduced, effectively alleviating adverse motion-sickness reactions such as dizziness and vomiting and improving the user experience. It can be understood that the methods provided in this application are applicable to more scenarios, for example displaying images projected from a PC or displaying images on an in-vehicle device; through the parallel processing of sliced transmission, rendering, and sending for display, the end-to-end display process is accelerated, delay is reduced, and the user's viewing experience is improved.
The implementations described in the foregoing embodiments are merely exemplary descriptions and do not constitute any limitation on other embodiments of this application. The specific internal implementation may vary with the type of electronic device, the operating system it runs, the program used, and the interface invoked; the embodiments of this application impose no limitation, provided that the characteristic functions described in the embodiments of this application can be implemented.

As used in the foregoing embodiments, depending on the context, the term "when" may be interpreted to mean "if", "after", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "when it is determined" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)".

All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used, they may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to the computer, or a data storage device, such as a server or data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive), or the like.

A person of ordinary skill in the art may understand that all or some of the procedures of the methods in the foregoing embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when executed, the program may include the procedures of the foregoing method embodiments. The aforementioned storage medium includes any medium that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of their technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (32)

  1. An image transmission and display method, wherein the method is used for display by a first device, the first device comprises a display device, and the method comprises:
    between a first vertical synchronization signal and a second vertical synchronization signal, the first device transmits a first image signal to the display device;
    between the first vertical synchronization signal and the second vertical synchronization signal, the first device transmits a second image signal to the display device, wherein the first image signal is not synchronized with the second image signal.
  2. The method according to claim 1, wherein:
    the first device transmits the first image signal to the display device also between a first display signal and a second display signal;
    the first device transmits the second image signal to the display device also between the first display signal and the second display signal.
  3. The method according to claim 2, wherein the time interval between the first vertical synchronization signal and the second vertical synchronization signal is T1, the time interval between the first display signal and the second display signal is T2, and T1 is equal to T2.
  4. The method according to any one of claims 1 to 3, wherein the first image signal is an image signal of a rendered first image, and the second image signal is an image signal of a rendered second image.
  5. The method according to claim 4, wherein the method further comprises:
    when the first device transmits the first image signal to the display device, the first device renders the second image.
  6. The method according to claim 4 or 5, further comprising:
    before the first device transmits the first image signal to the display device, the first device renders the first image.
  7. The method according to any one of claims 4 to 6, wherein the first device further comprises an image rendering module, the image rendering module is configured to render the first image and the second image, and the display device sends a feedback signal to the image rendering module, the feedback signal being used to indicate vertical synchronization information of the display device.
  8. The method according to claim 6 or 7, wherein:
    before the first device renders the first image, the method further comprises:
    the first device receives the first image transmitted by a second device;
    before the first device renders the second image, the method further comprises:
    the first device receives the second image transmitted by the second device.
  9. The method according to claim 8, wherein the first device receiving the second image transmitted by the second device specifically comprises:
    when the first device renders the first image, the first device receives the second image transmitted by the second device.
  10. The method according to any one of claims 1 to 3, wherein:
    before the first device transmits the first image signal to the display device, the method further comprises:
    the first device receives a first image transmitted by a second device;
    before the first device transmits the second image signal to the display device, the method further comprises:
    the first device receives a second image transmitted by the second device.
  11. The method according to claim 10, wherein the first device receiving the second image transmitted by the second device specifically comprises:
    when the first device transmits the first image signal to the display device, the first device receives the second image transmitted by the second device.
  12. The method according to claim 10 or 11, wherein both the first image and the second image are rendered by the second device.
  13. The method according to claim 12, wherein the first device receiving the first image transmitted by the second device specifically comprises:
    when the second image is being rendered by the second device, the first device receives the first image transmitted by the second device.
  14. The method according to any one of claims 2 to 13, wherein the method further comprises:
    in response to the second display signal, the first device displays a third image, the third image comprising the first image and the second image.
  15. The method according to any one of claims 1 to 14, wherein the method further comprises:
    in response to the first image signal, the first device displays the first image;
    in response to the second image signal, the first device displays the second image.
  16. The method according to claim 14, wherein the display device further comprises a frame buffer, the frame buffer being used to store pixel data of the first image and the second image;
    the first device displaying the third image specifically comprises:
    the first device reads the pixel data of the first image and the second image from the frame buffer;
    the first device displays the third image.
  17. The method according to claim 15, wherein the display device further comprises a frame buffer, the frame buffer being used to store pixel data of the first image and the second image;
    the first device displaying the first image specifically comprises:
    the first device reads the pixel data of the first image from the frame buffer;
    the first device displays the first image;
    the first device displaying the second image specifically comprises:
    the first device reads the pixel data of the second image from the frame buffer;
    the first device displays the second image.
  18. An image transmission and display method, wherein the method is used for display by a first device, the first device comprises a display device, and the method comprises:
    the first device transmits a first image to the display device;
    the first device transmits a second image to the display device;
    the display device displays a third image, wherein the third image comprises the first image and the second image.
  19. The method according to claim 18, wherein:
    before the first device transmits the first image to the display device, the method further comprises:
    the first device renders the first image;
    before the first device transmits the second image to the display device, the method further comprises:
    the first device renders the second image.
  20. The method according to claim 19, wherein the first device rendering the second image specifically comprises:
    when the first device transmits a first image signal to the display device, the first device renders the second image.
  21. The method according to claim 20, wherein:
    before the first device renders the first image, the method further comprises:
    the first device receives the first image transmitted by a second device;
    before the first device renders the second image, the method further comprises:
    the first device receives the second image transmitted by the second device.
  22. The method according to claim 21, wherein the first device receiving the second image transmitted by the second device specifically comprises:
    when the first device renders the first image, the first device receives the second image transmitted by the second device.
  23. An image transmission and display method, wherein the method comprises:
    a first device establishes a connection with a second device;
    the second device transmits a first image to the first device over the connection;
    the second device transmits a second image to the first device over the connection;
    the first device comprises a display device, and the first device transmits the first image to the display device;
    the first device transmits the second image to the display device;
    the display device displays a third image, wherein the third image comprises the first image and the second image.
  24. The method according to claim 23, wherein the second device transmitting the second image to the first device over the connection specifically comprises:
    when the first device transmits the first image to the display device, the second device transmits the second image to the first device over the connection.
  25. The method according to claim 23 or 24, wherein:
    before the second device transmits the first image to the first device over the connection, the method further comprises:
    the second device renders the first image;
    before the second device transmits the second image to the first device over the connection, the method further comprises:
    the second device renders the second image.
  26. The method according to claim 25, wherein the second device rendering the second image specifically comprises:
    when the first device receives the first image transmitted by the second device, the second device renders the second image.
  27. The method according to claim 23, wherein:
    after the second device transmits the first image to the first device over the connection and before the first device transmits the first image to the display device, the method further comprises:
    the first device renders the first image;
    after the second device transmits the second image to the first device over the connection and before the first device transmits the second image to the display device, the method further comprises:
    the first device renders the second image.
  28. The method according to claim 27, wherein the first device rendering the first image specifically comprises:
    when the second device transmits the second image to the first device over the connection, the first device renders the first image.
  29. The method according to any one of claims 23 to 28, wherein the method further comprises:
    in response to a first image signal, the first device displays the first image;
    in response to a second image signal, the first device displays the second image.
  30. An electronic device, wherein the electronic device comprises: a communication apparatus, a display apparatus, a memory, a processor coupled to the memory, a plurality of application programs, and one or more programs; the memory stores computer-executable instructions, the display apparatus is configured to display images, and when the processor executes the instructions, the electronic device is caused to implement the method according to any one of claims 1 to 17 or claims 18 to 22.
  31. A storage medium, wherein the storage medium stores a computer program, the computer program comprises executable instructions, and the executable instructions, when executed by a processor, cause the processor to perform the operations corresponding to the method provided in any one of claims 1 to 17 or claims 18 to 22.
  32. A computer program product comprising instructions, wherein when the computer program product runs on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 17 or claims 18 to 22.
PCT/CN2022/091722 2021-05-31 2022-05-09 Image transmission and display method and related device and system WO2022252924A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110602850.9 2021-05-31
CN202110602850.9A CN115480719A (en) 2021-05-31 2021-05-31 Image transmission and display method, related equipment and system

Publications (1)

Publication Number Publication Date
WO2022252924A1

Family

ID=84323858

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091722 WO2022252924A1 (en) 2021-05-31 2022-05-09 Image transmission and display method and related device and system

Country Status (2)

Country Link
CN (1) CN115480719A (en)
WO (1) WO2022252924A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106572353A (en) * 2016-10-21 2017-04-19 上海拆名晃信息科技有限公司 Wireless transmission method and wireless transmission device for virtual reality, terminal, and head-mounted display equipment
US20180164586A1 (en) * 2016-12-12 2018-06-14 Samsung Electronics Co., Ltd. Methods and devices for processing motion-based image
CN109920040A (en) * 2019-03-01 2019-06-21 京东方科技集团股份有限公司 Show scene process method and apparatus, storage medium
CN111586391A (en) * 2020-05-07 2020-08-25 中国联合网络通信集团有限公司 Image processing method, device and system
CN112104855A (en) * 2020-09-17 2020-12-18 联想(北京)有限公司 Image processing method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116708753A (en) * 2022-12-19 2023-09-05 荣耀终端有限公司 Method, device and storage medium for determining preview blocking reason
CN116708753B (en) * 2022-12-19 2024-04-12 荣耀终端有限公司 Method, device and storage medium for determining preview blocking reason
CN117726923A (en) * 2024-02-05 2024-03-19 河北凡谷科技有限公司 Image communication system based on specific model
CN117726923B (en) * 2024-02-05 2024-05-14 河北凡谷科技有限公司 Image communication system based on specific model

Also Published As

Publication number Publication date
CN115480719A (en) 2022-12-16

Similar Documents

Publication Publication Date Title
CN110199267B (en) Miss-free cache structure for real-time image conversion with data compression
CN110221432B (en) Image display method and device of head-mounted display
US10692274B2 (en) Image processing apparatus and method
US10212428B2 (en) Reprojecting holographic video to enhance streaming bandwidth/quality
US9858637B1 (en) Systems and methods for reducing motion-to-photon latency and memory bandwidth in a virtual reality system
CN110249317B (en) Miss-free cache architecture for real-time image transformations
CN110494823B (en) Loss-free cache architecture for real-time image transformation using multiple LSR processing engines
WO2022252924A1 (en) Image transmission and display method and related device and system
US20170150139A1 (en) Electronic device and method for displaying content according to display mode
WO2020093988A1 (en) Image processing method and electronic device
WO2022095744A1 (en) Vr display control method, electronic device, and computer readable storage medium
WO2021103990A1 (en) Display method, electronic device, and system
CN112004041B (en) Video recording method, device, terminal and storage medium
WO2021147465A1 (en) Image rendering method, electronic device, and system
WO2021013043A1 (en) Interactive method and apparatus in virtual reality scene
WO2023082980A1 (en) Display method and electronic device
WO2023001113A1 (en) Display method and electronic device
CN110494840A (en) Electronic equipment and screen picture display methods for electronic equipment
WO2023202445A1 (en) Demonstration system, method, graphical interface, and related apparatus
WO2023040775A1 (en) Preview method, electronic device, and system
CN116708696B (en) Video processing method and electronic equipment
WO2023179442A1 (en) 3d display method and apparatus
US20230262406A1 (en) Visual content presentation with viewer position-based audio
WO2022233256A1 (en) Display method and electronic device
KR20180087643A (en) Electronic apparatus and controlling method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22814971

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22814971

Country of ref document: EP

Kind code of ref document: A1