CN106998409B - Image processing method, head-mounted display and rendering equipment - Google Patents


Info

Publication number
CN106998409B
CN106998409B CN201710169082.6A
Authority
CN
China
Prior art keywords
information
time
moment
head
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710169082.6A
Other languages
Chinese (zh)
Other versions
CN106998409A (en)
Inventor
郑方舟
高剑
罗毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710169082.6A priority Critical patent/CN106998409B/en
Publication of CN106998409A publication Critical patent/CN106998409A/en
Priority to PCT/CN2018/078131 priority patent/WO2018171421A1/en
Application granted granted Critical
Publication of CN106998409B publication Critical patent/CN106998409B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Embodiments of the invention disclose an image processing method, a head-mounted display, and a rendering device. One of the methods comprises: sending pose information of the head-mounted display at a first moment to a rendering device; receiving video data from the rendering device, wherein the video data comprises image data obtained by rendering according to the pose information; acquiring pose information of the head-mounted display at a second moment, wherein the second moment is later than the first moment; and adjusting the image indicated by the image data according to the information difference between the pose information at the second moment and the pose information at the first moment, thereby obtaining an adjusted image for display. With the embodiments of the invention, the rendered image data obtained by the rendering device according to the pose information at the first moment is adjusted by the HMD itself, and the pose information at the second moment used for the adjustment need not be sent to the rendering device, so that the adjusted image data matches the head rotation and the picture drawing delay caused by scene changes due to head rotation is eliminated.

Description

Image processing method, head-mounted display and rendering equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method, a head mounted display, and a rendering device.
Background
Virtual reality (VR) technology is a multimedia technology that has emerged in recent years. It is a computer simulation technique that creates a virtual world the user can experience. In a VR scenario, a user can have an interactive, immersive experience by wearing a head-mounted display (HMD) that integrates a graphics system, an optical system, and a pose tracking system.
In a VR video transmission scenario, wireless transmission is mostly adopted. Currently, as shown in fig. 1, a wireless VR video transmission process includes: the HMD acquires pose information and wirelessly transmits it to a rendering device (AP); the AP renders the corresponding image data/video data according to the received pose information and wirelessly transmits it back to the HMD, and the HMD displays it. When the AP is rendering, if the rendering performance for a certain frame is insufficient (e.g., the rendering time is too long), the next frame of data is discarded, so that the HMD displays the same image data as the previous frame. Once the head rotates, the light of that same image falls on different parts of the retina, causing the picture to shake. To reduce the frame jitter, the graphics processing unit (GPU) on the AP side performs a secondary rendering process that spatially shifts and distorts that frame image to match the head rotation.
However, during the secondary rendering, the image needs to be shifted according to the information difference between the pose information corresponding to that frame and the pose information acquired by the HMD at the current time in order to match the head rotation; that is, the pose information acquired at the current time must be wirelessly transmitted to the AP side. Because this transmission incurs a delay, the adjusted image still cannot match the head rotation, and the picture drawing delay caused by scene changes due to head rotation cannot be eliminated.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, a head-mounted display, and a rendering device, in which rendered image data obtained by the rendering device according to pose information at a first moment is adjusted by the HMD, and the pose information at a second moment used for the adjustment need not be sent to the rendering device, so that the transmission delay of the pose information is reduced, the adjusted image data matches the head rotation, and the picture drawing delay caused by scene changes due to head rotation is eliminated.
A first aspect of the present invention provides an image processing method, including: the head-mounted display sends pose information of the head-mounted display at a first moment to a rendering device and receives video data from the rendering device, wherein the video data comprises image data obtained by rendering according to the pose information at the first moment; the head-mounted display then acquires pose information of the head-mounted display at a second moment, the second moment being later than the first moment, and adjusts the image indicated by the image data according to the information difference between the pose information at the second moment and the pose information at the first moment, thereby obtaining an adjusted image for display.
In the first aspect of the embodiments of the present invention, the HMD adjusts the rendered image data obtained by the rendering device according to the pose information at the first moment, and the pose information at the second moment used for the adjustment need not be sent to the rendering device, so that the transmission delay of the pose information is reduced, the adjusted image data matches the head rotation, and the picture drawing delay caused by scene changes due to head rotation is eliminated.
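The information difference between the two moments can be illustrated with a minimal sketch. The `Pose` type below is a hypothetical representation (Euler angles in degrees); the patent does not prescribe a concrete data format:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Hypothetical attitude-angle representation, in degrees."""
    yaw: float
    pitch: float
    roll: float

def pose_difference(second: Pose, first: Pose) -> Pose:
    """Information difference between the pose at the second moment and the first."""
    return Pose(second.yaw - first.yaw,
                second.pitch - first.pitch,
                second.roll - first.roll)
```

Because the difference is computed on the HMD itself, no second wireless round trip is needed before the adjustment.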
With reference to the first aspect, in a first implementation manner of the first aspect, the method further includes: the head-mounted display sends first moment information indicating the first moment to the rendering device, where the first moment information is bound with the pose information at the first moment; when the video data further comprises the first moment information bound with the video data, the head-mounted display acquires the pose information at the first moment that is bound with the first moment information, and adjusts the image indicated by the image data according to the information difference between the pose information at the second moment and the pose information at the first moment.
With reference to the first aspect or the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the posture information at the first time includes a posture angle at the first time, and the posture information at the second time includes a posture angle at the second time.
With reference to the first aspect or the first implementation manner of the first aspect or the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the adjusting the image indicated by the image data according to an information difference between the pose information at the second moment and the pose information at the first moment to obtain an adjusted image includes: multiplying the transformation matrix corresponding to the information difference with the image indicated by the image data to obtain the adjusted image.
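As a sketch of what "multiplying a transformation matrix with the image" can mean in practice (the patent does not give the matrix), a pure-rotation reprojection maps each pixel through x' ~ K·R·K⁻¹·x, where R is built from the attitude-angle difference and K is an assumed camera intrinsic matrix; only a yaw difference is shown here:

```python
import numpy as np

def yaw_rotation(deg: float) -> np.ndarray:
    """3x3 rotation about the vertical axis for a yaw difference, in degrees."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def warp_pixel(K: np.ndarray, R: np.ndarray, pixel: np.ndarray) -> np.ndarray:
    """Reproject one pixel through the rotation difference: x' ~ K R K^-1 x."""
    h = K @ R @ np.linalg.inv(K) @ np.append(pixel, 1.0)
    return h[:2] / h[2]
```

Applying `warp_pixel` to every pixel (or to the corners of a textured quad) yields the adjusted image; a zero difference leaves pixels unchanged, while a yaw difference shifts them horizontally.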
With reference to the first aspect, or the first implementation manner of the first aspect, or the second implementation manner of the first aspect, or the third implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the method further includes: the head-mounted display acquires pose information of the head-mounted display at a third moment, the third moment being later than the first moment and not the same moment as the second moment, and adjusts the image indicated by the image data according to the information difference between the pose information at the third moment and the pose information at the first moment, thereby obtaining another adjusted image for display. By performing two different displacement adjustments on the same rendered frame of image data, using the information difference between the pose information at the third moment and that at the first moment and the information difference between the pose information at the second moment and that at the first moment, frame interpolation of the image data is realized, which lowers the required rendering frame rate of the image data and reduces the wireless transmission bandwidth of the image data.
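The frame-interpolation idea in this implementation, one rendered frame yielding two locally adjusted display frames, can be sketched as follows; `warp` and `diff` are hypothetical stand-ins for the adjustment operation and the information difference:

```python
def interpolate_from_one_frame(image, pose_t1, pose_t2, pose_t3, warp, diff):
    """Produce two display frames from one frame rendered for pose_t1.

    Each output is the rendered frame adjusted by the pose difference
    between a later moment (second or third) and the first moment.
    """
    frame_for_t2 = warp(image, diff(pose_t2, pose_t1))  # first adjustment
    frame_for_t3 = warp(image, diff(pose_t3, pose_t1))  # second adjustment
    return frame_for_t2, frame_for_t3
```

Since the rendering device only has to produce one frame per two displayed frames, both its rendering rate and the wireless bandwidth it consumes are halved in this toy model.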
A second aspect of the present invention provides an image processing method, including: the rendering device receives pose information at a first moment from the head-mounted display, together with first moment information bound with the pose information at the first moment, where the first moment information indicates the first moment; it then renders image data according to the pose information at the first moment, specifically by rendering the panoramic video/image according to the pose information at the first moment to obtain the image data; finally, it sends the image data and the first moment information bound with the image data to the head-mounted display, for the head-mounted display to adjust the image data.
In the second aspect of the embodiment of the present invention, after receiving the posture information of the head-mounted display, the rendering device renders the video data/image data according to the posture information, and sends the video data/image data to the head-mounted display, so that the head-mounted display adjusts and displays the video data/image data, thereby enabling the displayed image data to match with the head rotation, and eliminating the picture drawing delay caused by the scene change due to the head rotation.
A third aspect of the invention provides a head mounted display comprising: the wireless transmission unit is used for sending the attitude information of the head-mounted display at a first moment to the rendering equipment; the wireless transmission unit is further used for receiving video data from the rendering device, wherein the video data comprises image data obtained by rendering according to the attitude information; the motion sensor unit is used for acquiring the posture information of the head-mounted display at a second moment, and the second moment is later than the first moment; and the image processing unit is used for adjusting the image indicated by the image data received by the wireless transmission unit according to the information difference between the attitude information of the second moment and the attitude information of the first moment acquired by the motion sensor unit, so as to obtain an adjusted image for display.
In the third aspect of the embodiment of the present invention, the HMD is used to adjust the rendered image data obtained by the rendering device according to the pose information at the first time, and the pose information at the second time for adjustment processing is not required to be sent to the rendering device, so that the transmission delay of the pose information is reduced, the adjusted image data can be matched with the head rotation, and the picture drawing delay caused by the scene change due to the head rotation is eliminated.
With reference to the third aspect, in a first implementation manner of the third aspect, the head mounted display further includes: the wireless transmission unit is further used for sending first time information used for indicating the first time to the rendering device, and the first time information is bound with the attitude information of the first time; the video data further comprises the first time information bound with the video data; the image processing unit is specifically configured to: and adjusting the image indicated by the image data received by the wireless transmission unit according to the information difference between the attitude information at the second moment and the attitude information at the first moment acquired by the motion sensor unit.
With reference to the third aspect, in a second implementation manner of the third aspect, the attitude information at the first time includes an attitude angle at the first time, and the attitude information at the second time includes an attitude angle at the second time.
With reference to the second implementation manner of the third aspect, in a third implementation manner of the third aspect, the image processing unit is specifically configured to: multiply the transformation matrix corresponding to the information difference with the image indicated by the image data to obtain the adjusted image.
With reference to the third implementation manner of the third aspect, in a fourth implementation manner of the third aspect, the head-mounted display further includes: the motion sensor unit is further configured to acquire pose information of the head-mounted display at a third moment, where the third moment is later than the first moment and is not the same moment as the second moment; the image processing unit is further configured to adjust the image indicated by the image data received by the wireless transmission unit according to the information difference, acquired by the motion sensor unit, between the pose information at the third moment and the pose information at the first moment, so as to obtain another adjusted image for display. By performing two different displacement adjustments on the same rendered frame of image data, using the information difference between the pose information at the third moment and that at the first moment and the information difference between the pose information at the second moment and that at the first moment, frame interpolation of the image data is realized, which lowers the required rendering frame rate of the image data and reduces the wireless transmission bandwidth of the image data.
A fourth aspect of the present invention provides a rendering apparatus comprising: the wireless transmission unit is used for receiving attitude information at a first moment from a head-mounted display and first moment information bound with the attitude information at the first moment, wherein the first moment information is used for indicating the first moment; the rendering display unit is used for rendering to obtain image data according to the attitude information at the first moment received by the wireless transmission unit; the wireless transmission unit is further configured to send the image data obtained by the rendering and displaying unit and the first time information bound with the image data and received by the wireless transmission unit to the head-mounted display.
In the fourth aspect of the embodiments of the present invention, after receiving the posture information of the head-mounted display, the rendering device renders the video data/image data according to the posture information, and sends the video data/image data to the head-mounted display, so that the head-mounted display adjusts and displays the video data/image data, thereby enabling the displayed image data to match with the head rotation, and eliminating the picture drawing delay caused by the scene change due to the head rotation.
In one possible design, the structure of the head mounted display includes a processor and a transceiver. The processor is configured to execute the image processing method provided by the first aspect of the embodiment of the present invention. Optionally, a memory may be included for storing application program code that enables the head-mounted display to perform the above method, and the processor is configured for executing the application program stored in the memory.
In one possible design, the rendering device may be configured to include a processor and a transceiver. The processor is configured to execute the image processing method provided by the second aspect of the embodiment of the present invention. Optionally, a memory may be further included, the memory being configured to store application program codes that support a rendering device to execute the above method, and the processor being configured to execute the application program stored in the memory.
By implementing the embodiments of the invention, the head-mounted display acquires its pose information at a first moment and sends it to the rendering device; the rendering device renders video data/image data according to the pose information and transmits it back to the head-mounted display; after acquiring its pose information at a second moment, the head-mounted display adjusts the received video data/image data according to the information difference between the pose information at the second moment and the pose information at the first moment. The pose information at the second moment need not be sent to the rendering device, so the transmission delay of the pose information is reduced, the adjusted image data matches the head rotation, and the picture drawing delay caused by scene changes due to head rotation is eliminated.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present invention, the drawings required to be used in the embodiments or the background art of the present invention will be described below.
Fig. 1 is a schematic diagram of a logical structure of a wireless VR video transmission according to an embodiment of the present invention;
FIG. 2(a) is an interface diagram of an ATW rendering operation according to an embodiment of the present invention;
fig. 2(b) is a schematic diagram of an interface for sending and receiving attitude information according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 4 is an interface schematic diagram of an attitude angle description provided by an embodiment of the invention;
FIG. 5 is a schematic interface diagram of an image adjustment operation according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a head-mounted display according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a rendering apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a head-mounted display according to another embodiment of the present invention;
fig. 9 is a schematic structural diagram of a rendering apparatus according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The HMD in the embodiments of the invention may be a display device such as VR glasses or an eye tracker, used for collecting information, computing, processing, and displaying; the rendering device in the embodiments of the present invention may be an application processor that provides virtual-reality computation, and may also be another type of computing terminal (Proxy), such as a mobile phone (cellular phone), a smartphone, a computer, a tablet computer, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), and the like.
The wireless transmission mode mentioned in the embodiments of the present invention may be the Zigbee protocol (Zigbee), Bluetooth, Wireless Fidelity (Wi-Fi), Infrared Data Association (IrDA), Ultra WideBand (UWB), Near Field Communication (NFC), WiMedia, Global Positioning System (GPS), wireless 139, a dedicated wireless system, or the like.
The technical solutions of the embodiments of the present invention are applicable to various communication systems, such as a Global System for Mobile Communications (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a Long-Term Evolution (LTE) system, a MIMO system, and the like.
An application scenario of the embodiments of the present invention is described below. In a VR scenario, limited by the capability of existing mobile terminal processors, some large-scale games or video applications cannot complete VR processing with a processing unit placed on the head-mounted display. A solution for VR applications that require heavy computation, or that must save energy, is therefore to place the computing unit on a desktop computer, a mobile phone, or a game console and connect the head-mounted display by wire, which reduces the weight of the head-mounted display and brings the user a very good audio-visual experience. However, this kind of split VR system requires a wired connection, which makes it very inconvenient for the user to move, so a wireless transmission mode has been proposed to remove the trouble caused by the wired connection. As shown in fig. 1, the HMD acquires pose information and transmits it wirelessly to a rendering device (e.g., an AP); the AP receives the pose information, renders the corresponding video data/image data, and wirelessly transmits it back to the HMD, so that the HMD displays the video indicated by the video data. In a VR scene, the image update must be synchronized with the Vsync signal, but if the rendering performance for a certain frame is insufficient (e.g., the rendering time is too long), the next frame is lost, so that the HMD displays the same image as the previous frame. Once the head rotates, the light of that same image falls on different parts of the retina, causing the picture to shake. To reduce the image jitter, the GPU on the AP side performs a secondary rendering process. As shown in fig. 2(a), L1, L2, and L3 are three continuous frames of left-eye image data and R1, R2, and R3 are three continuous frames of right-eye image data; normally, the left-eye image data of a frame is rendered first and then the right-eye image data, and the left-eye and right-eye image data of the same frame can be regarded as one frame of image data. In the processing of the first frame, the left-eye and right-eye image data are rendered in time and sent directly to the HMD for display; in the processing of the second frame, the rendering cannot be completed in time, and if nothing were done, picture jitter would occur. The jitter is reduced by spatially shifting and distorting the second frame image on the AP side to match the head rotation. However, during the secondary rendering, the image needs to be shifted according to the information difference between the pose information corresponding to that frame and the pose information acquired by the HMD at the current time, that is, the pose information acquired at the current time must be wirelessly transmitted to the AP, and this transmission incurs a delay. As shown in fig. 2(b), if the pose information acquired by the HMD at times T0, T1, T2, and T3 is MU0, MU1, MU2, and MU3, respectively, then because the MU transmission from the HMD to the AP incurs a delay, the AP receives MU0, MU1, and MU2 at times T1, T2, and T3.
Taking MU0 as an example: the AP renders according to MU0 at time T1 and obtains rendered image data A at time T2 (rendering also incurs a delay). Since A cannot match the head motion, A needs to be rendered a second time; that is, the information difference between the pose information MU2 collected by the HMD at time T2 and the pose information MU0 corresponding to A should be computed. In fact, however, the pose information received by the AP at time T2 is MU1, so the AP computes the information difference between MU1 and MU0. Thus, although the image jitter is reduced, the picture drawing delay caused by the head rotation still cannot be eliminated.
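The timing problem of fig. 2(b) can be reproduced with a toy model, assuming (for illustration only) a transmission delay of exactly one sample period from the HMD to the AP:

```python
TRANSMIT_DELAY = 1  # sample periods from HMD to AP; illustrative assumption

def pose_at_ap(hmd_poses: dict, t: int):
    """Pose the AP has actually received at time t: one period stale."""
    return hmd_poses.get(t - TRANSMIT_DELAY)

hmd_poses = {0: "MU0", 1: "MU1", 2: "MU2", 3: "MU3"}
# At T2 the HMD already holds MU2, but the AP has only received MU1,
# so an AP-side secondary rendering at T2 necessarily uses a stale pose.
```

This is exactly why the embodiments move the adjustment onto the HMD, where the freshest pose sample is available without any transmission delay.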
In the embodiments of the invention, the head-mounted display acquires its pose information at a first moment and sends it to the rendering device; the rendering device renders video data/image data according to the pose information and transmits it back to the head-mounted display; after acquiring its pose information at a second moment, the head-mounted display adjusts the received video data/image data according to the information difference between the pose information at the second moment and the pose information at the first moment. The pose information at the second moment need not be sent to the rendering device, so the transmission delay of the pose information is reduced, the adjusted image data matches the head rotation, and the picture drawing delay caused by scene changes due to head rotation is eliminated.
The image processing method is described in detail below by the embodiments shown in fig. 3 to 9.
Referring to fig. 3, fig. 3 is a flowchart illustrating an image processing method according to an embodiment of the present invention. The method is described below from the two sides, the head-mounted display and the rendering device, and includes:
step S101: the head mounted display sends posture information of the head mounted display at a first moment to a rendering device.
Specifically, the pose information is the pose information of the head-mounted display device corresponding to the part of the panoramic image of the VR scene currently seen by the human eye. Taking the field of view (FOV) as an example, the panoramic image spans a 360° horizontal FOV and a 180° vertical FOV, while the FOV in a VR scene is usually 90° horizontal and 90° vertical; that is, the image visible to the human eye in the VR scene is only 1/8 of the panoramic image (90 × 90 / (360 × 180)), and this 1/8 image changes as the head-mounted display rotates. In other words, the displayed 1/8 image corresponds one-to-one with the pose information of the head-mounted display device. The pose information at the first moment comprises the attitude angle at the first moment.
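The 1/8 figure follows directly from the ratio of viewport span to panorama span (a simplified flat-area ratio, not accounting for spherical projection):

```python
def visible_fraction(h_fov_deg: float, v_fov_deg: float,
                     pano_h_deg: float = 360.0,
                     pano_v_deg: float = 180.0) -> float:
    """Fraction of the panoramic image covered by the viewport (flat-area ratio)."""
    return (h_fov_deg * v_fov_deg) / (pano_h_deg * pano_v_deg)
```

For the 90° × 90° viewport of the text, this gives 8100 / 64800 = 0.125, i.e. 1/8 of the panorama.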
Generally, attitude information of an object may be measured by an Inertial Measurement Unit (IMU) sensor. The IMU sensor comprises three single-axis accelerometers and three single-axis gyroscopes, the accelerometers detect acceleration signals of an object in three independent axes of a carrier coordinate system, the gyroscopes detect angular velocity signals of the carrier relative to the carrier coordinate system, and the attitude of the object is calculated by measuring the angular velocity and the acceleration of the object in a three-dimensional space. The calculation method may be a preset algorithm, and is not limited specifically.
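One common way to fuse the gyroscope and accelerometer signals into an attitude angle is a complementary filter. This is a generic illustration of such a "preset algorithm", not necessarily the one the patent has in mind (it leaves the calculation method unspecified):

```python
def complementary_filter(prev_angle: float, gyro_rate: float,
                         accel_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    """Estimate one attitude angle by blending the integrated gyro rate
    (accurate short-term) with the accelerometer-derived angle
    (drift-free long-term). Angles in degrees, rate in degrees/second."""
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

Run once per IMU sample, this keeps the gyro's responsiveness while the accelerometer term slowly corrects the gyro's drift.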
In a specific implementation, the posture information of a device can be acquired directly by arranging an IMU sensor in the device. For example, the Intel Curie chip integrates a BMI160 IMU, packaging a 16-bit, extremely low-power three-axis accelerometer and three-axis gyroscope, so an Intel Curie module can directly measure and calculate its own posture information. In the embodiment of the invention, the IMU sensor is arranged in the head-mounted display, the posture information can be acquired through the head-mounted display, and the acquired posture information is sent to the rendering device.
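The patent leaves the fusion algorithm unspecified ("a preset algorithm"). One common illustrative choice is a complementary filter that blends the integrated gyroscope angle with the accelerometer's gravity-based estimate; everything below (names, signature, the blend factor alpha = 0.98) is an assumption, not the patent's method:

```python
import math

def complementary_filter(pitch_prev_deg: float, gyro_rate_dps: float,
                         accel_y: float, accel_z: float,
                         dt: float, alpha: float = 0.98) -> float:
    """One fusion step: blend the integrated gyro angle (short-term accurate,
    drifts over time) with the accelerometer estimate (noisy, drift-free)."""
    gyro_pitch = pitch_prev_deg + gyro_rate_dps * dt
    accel_pitch = math.degrees(math.atan2(accel_y, accel_z))
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

In practice this step would run at the IMU sample rate (e.g. 1000 Hz), feeding each new pitch estimate back in as `pitch_prev_deg`.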
Optionally, the measured posture information changes from moment to moment as the head-mounted display rotates.
In one embodiment, the head mounted display further performs step S1011: and the head-mounted display sends first time information used for indicating the first time to a rendering device, wherein the first time information is bound with the posture information of the first time.
Specifically, the time information may be a timestamp. A timestamp is typically a character sequence that uniquely identifies the time of a certain moment. Because the head-mounted display acquires posture information as a continuous real-time process, each piece of posture information can be uniquely identified by a timestamp.
In the embodiment of the present invention, the time information indicating the first moment is taken as the first time information, such as a first timestamp. Because the head-mounted display continuously acquires posture information at its acquisition rate, the target posture information is identified by attaching a timestamp to each piece of acquired posture information.
In one embodiment, the head mounted display further performs step S1012: and the head-mounted display adds the acquired attitude information and time information indicating each attitude information to a cache.
Specifically, the head-mounted display caches the acquired posture information together with the corresponding time information, for example in the form of a mapping table, as shown in Table 1. Table 1 is a correspondence table between posture information and timestamps in the cache; the corresponding posture information can be found through the acquisition time indicated by the timestamp. For example, when the timestamp of the target posture information is time 1, the corresponding target posture information found in the cache is posture information 1.
TABLE 1

Attitude information    Timestamp
Attitude information 1  Time 1
Attitude information 2  Time 2
Attitude information 3  Time 3
Attitude information 4  Time 4
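The cache of Table 1 can be sketched as a bounded timestamp-to-pose mapping; the class and names below are hypothetical, not from the patent:

```python
from collections import OrderedDict

class PoseCache:
    """Bounded mapping from timestamps to pose samples, as in Table 1."""
    def __init__(self, capacity: int = 1000):
        self._cache = OrderedDict()
        self._capacity = capacity

    def add(self, timestamp, pose):
        if len(self._cache) >= self._capacity:
            self._cache.popitem(last=False)   # evict the oldest sample
        self._cache[timestamp] = pose

    def lookup(self, timestamp):
        """Return the pose bound to this timestamp, or None if evicted/unknown."""
        return self._cache.get(timestamp)

cache = PoseCache()
cache.add("time 1", "attitude information 1")
assert cache.lookup("time 1") == "attitude information 1"
```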
Step S102: the rendering device receives pose information from the head mounted display at a first time.
Further, the rendering apparatus further performs step S1021: the rendering device receives first time information bound with the attitude information at the first time.
Step S103: and rendering the image data by the rendering equipment according to the attitude information at the first moment.
Specifically, rendering the image data according to the posture information at the first moment can be understood as the rendering device rendering the panoramic video/image according to the posture information at the first moment, thereby generating the video data/image data. Rendering (render) refers to the process of adding properties and methods to a component and then adding the component to its corresponding container. For example, in Adobe Photoshop (PS), rendering refers to adding attributes such as color and size to the current canvas, where the canvas can be regarded as a container holding many small components, each with its own attributes.
In a specific implementation, the rendering device renders the images corresponding to the left-eye and right-eye viewpoints according to the posture information acquired by the IMU sensor. As to the rendering order, the left-eye image may be rendered first and then the right-eye image, or vice versa; the rendering manner is the same for both.
In one embodiment, since the head-mounted display continuously acquires posture information in real time while the rendering device renders at its own rendering frame rate, the rendering device renders a plurality of pieces of posture information to generate rendered video data, where the rendered video data includes multiple frames of image data.
For example, if the head-mounted display acquires posture information at 1000 samples per second and the rendering frame rate of the rendering device is 100 Hz, then within that second the rendering device selects 100 pieces of posture information from the 1000 received pieces for rendering, thereby generating rendered video data comprising 100 frames of image data. The posture information may be selected, for example, by randomly picking one out of every 10 pieces.
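The downsampling step can be sketched as follows. The patent mentions e.g. random selection of one out of every ten; this sketch takes the deterministic every-tenth variant for clarity, and the names are illustrative:

```python
def select_poses(poses, sample_rate_hz: int = 1000, render_rate_hz: int = 100):
    """Keep one pose per rendered frame: every (sample_rate / render_rate)-th sample."""
    step = sample_rate_hz // render_rate_hz
    return poses[::step]

poses = list(range(1000))          # one second of IMU samples at 1000 Hz
selected = select_poses(poses)     # 100 poses for a 100 Hz render loop
assert len(selected) == 100
```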
Optionally, after step S103, step S1031 is further performed: and the rendering equipment binds the image data with the first time information.
In one embodiment, the rendering device binds corresponding time information to each frame of image data in the generated rendered video, namely the time information of the posture information corresponding to that frame. For example, suppose the generated rendered video includes 5 frames of image data, generated according to posture information 1, 2, 5, 8, and 10 among the received posture information 1 to 10, whose corresponding time information is T1, T2, T5, T8, and T10 respectively; then the 5 frames of image data are bound with the time information T1, T2, T5, T8, and T10 respectively, to identify the posture information corresponding to each frame of image data.
Step S104: the rendering device sends the image data and first time information bound with the image data to the head-mounted display.
Step S105: the head-mounted display receives video data from the rendering device, wherein the video data comprises image data rendered according to the attitude information.
In one embodiment, after the step S105, a step S1051 is further performed: a head-mounted display receives first time instant information bound with image data in the video data.
Specifically, the video data includes at least one frame of image data; the frame to be adjusted among the at least one frame of image data is taken as the target image data, and the posture information corresponding to the target image data is the target posture information.
The target posture information may be obtained by taking the time information bound with the target image data, such as a timestamp, and then looking up the posture information corresponding to that timestamp in the cache.
Step S106: and the head-mounted display acquires the posture information of the head-mounted display at a second moment, wherein the second moment is later than the first moment.
Specifically, because the head-mounted display continuously acquires its posture information, each piece of posture information is identified through its time information. The posture information at the second moment comprises the attitude angle at the second moment.
The attitude angle of an object in three-dimensional space can be represented by a rotation matrix, Euler angles, a quaternion, and the like. Taking Euler angles as an example: for a reference frame in three-dimensional space, the orientation of any coordinate system can be represented by three Euler angles, namely a nutation angle a, a precession angle b, and a spin angle r, written for example as (a, r, b).
For example, as shown in fig. 4, XYZ are the coordinate axes of the reference frame (the coordinate axes of the posture information before the head rotates), xyz are the coordinate axes of the sample posture information after the head rotates, and the intersection line of the XY plane and the xy plane is the line of nodes, denoted N. The zxz-convention Euler angles can then be defined as: a is the angle between the X axis and the line of nodes, b is the angle between the Z axis and the z axis, and r is the angle between the line of nodes and the x axis, where a = α, b = β, and r = γ. That is, α, β, γ are the sample attitude angles after the head rotates.
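Under the zxz convention described above, the rotation matrix for a set of Euler angles can be built as the product Rz(a)·Rx(b)·Rz(r). This is the standard construction, offered here as an illustrative sketch rather than the patent's exact formulation:

```python
import math

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zxz(a, b, r):
    """Rotation matrix for zxz-convention Euler angles (precession a, nutation b, spin r)."""
    return matmul(rot_z(a), matmul(rot_x(b), rot_z(r)))

# Zero angles give the identity rotation.
assert euler_zxz(0.0, 0.0, 0.0) == [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```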
Step S107: and the head-mounted display adjusts the image indicated by the image data according to the information difference between the posture information at the second moment and the posture information at the first moment, so as to obtain an adjusted image for display.
Specifically, the head-mounted display acquires attitude information at a first time bound with the first time information, and adjusts an image indicated by the image data according to an information difference between the attitude information at a second time and the attitude information at the first time.
In a specific implementation, when the posture information comprises an attitude angle, the transformation matrix corresponding to the information difference is multiplied with the image indicated by the image data, thereby obtaining the adjusted image. The information difference is the posture difference of the head-mounted display at the two different moments. In a possible embodiment, the attitude angle at the first moment is subtracted from the attitude angle at the second moment, thereby obtaining the information difference between the two moments. As shown in fig. 4, taking XYZ as the coordinate axes of the posture information at the first moment, xyz as the coordinate axes of the posture information at the second moment, and XYZ as the reference axes, α, β, γ are the information difference between the two moments.
When the xyz coordinate axes are taken as reference axes and a rotation by θ (α = θ, β = θ, or γ = θ) is performed around the x, y, and z axes respectively, the corresponding rotation matrices are:

Rx(θ) = [ 1, 0, 0;  0, cos θ, −sin θ;  0, sin θ, cos θ ]
Ry(θ) = [ cos θ, 0, sin θ;  0, 1, 0;  −sin θ, 0, cos θ ]
Rz(θ) = [ cos θ, −sin θ, 0;  sin θ, cos θ, 0;  0, 0, 1 ]
Then, following the zxz convention above, the transformation matrix corresponding to the Euler angles is:

Fix = Rz(α) · Rx(β) · Rz(γ)
The source matrix corresponding to the image indicated by the image data is

C = [C1, C2, …, Cn], where each Ci = (xi, yi, 1)^T
where (x, y) represents the coordinates of each pixel on the image indicated by the image data; that is, C includes the matrices C1, C2 … Cn of the n pixels. C1, C2 … Cn are then multiplied by Fix respectively, to obtain the processing matrices A1, A2 … An for each pixel, where:
Ai = Fix · Ci, for i = 1, 2, …, n
Each obtained Ai indicates a pixel point on the adjusted image, so that the adjusted image is obtained.
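The per-pixel adjustment can be sketched as computing the information difference and mapping homogeneous pixel coordinates through the transformation matrix. This is a simplified illustration (a real remap would also involve the camera intrinsics), and all names are hypothetical:

```python
def pose_difference(angles_t2, angles_t1):
    """Information difference: attitude angles at the second moment minus the first."""
    return tuple(a2 - a1 for a2, a1 in zip(angles_t2, angles_t1))

def apply_fix(fix, pixels):
    """Map each homogeneous pixel coordinate C_i = (x, y, 1) through A_i = Fix · C_i."""
    out = []
    for x, y in pixels:
        c = (x, y, 1.0)
        a = [sum(fix[i][k] * c[k] for k in range(3)) for i in range(3)]
        out.append((a[0], a[1]))
    return out

# With zero information difference, Fix is the identity and pixels are unchanged.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert apply_fix(identity, [(10.0, 20.0)]) == [(10.0, 20.0)]
```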
Optionally, the method further includes: acquiring posture information of the head-mounted display at a third moment, wherein the third moment is later than the first moment, and the second moment and the third moment are not the same moment;
and adjusting the image indicated by the image data according to the information difference between the attitude information at the third moment and the attitude information at the first moment, so as to obtain another adjusted image for display.
Specifically, the head-mounted display adjusts the image indicated by the image data according to the information difference between the posture information at the third moment and the posture information at the first moment, in the same adjustment manner as in step S107, which is not described again.
As shown in fig. 5, PicA and PicB are image data rendered by the rendering device; PicA0 and PicA1 are the two frames of image data obtained after the head-mounted display adjusts PicA, and PicB0 and PicB1 are the two frames of image data obtained after it adjusts PicB.
In the embodiment of the invention, the head-mounted display acquires its posture information at the first moment and sends it to the rendering device; the rendering device renders video data/image data according to that posture information and transmits it back to the head-mounted display; after acquiring its posture information at the second moment, the head-mounted display adjusts the received video data/image data according to the information difference between the posture information at the second moment and the posture information at the first moment. Because the posture information at the second moment does not need to be sent to the rendering device, the transmission delay of the posture information is reduced, so that the adjusted image data can match the head rotation. On the one hand, this reduces the picture drawing delay caused by the scene change due to head rotation. On the other hand, as shown in fig. 2(b), when the rendering device (such as the AP) transmits the image data A1 generated by rendering to the head-mounted display (such as the HMD) at time T2 for display, the HMD, owing to the transmission delay of the image data/video data, receives and displays A1 only at time T3; A1, however, was generated according to the information difference between MU2 and MU0, whereas, to eliminate the picture drawing delay, the image data displayed at time T3 should be obtained according to the information difference between MU3 and MU0. Therefore, in the embodiment of the present invention, the HMD can adjust the rendered image data with the posture information of the latest moment, further matching the head rotation with the displayed image data. Hence, by adopting the embodiment of the invention, the picture drawing delay caused by the scene change due to head rotation can be eliminated.
In addition, through the information difference between the posture information at the third moment and the posture information at the first moment, together with the information difference between the posture information at the second moment and the posture information at the first moment, the rendered frame of image data undergoes two different displacement adjustments, thereby realizing frame interpolation of the image data, which allows the rendering frame rate of the image data to be reduced and thus reduces the transmission bandwidth of the image data.
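The frame-interpolation idea of deriving several displayed frames from one rendered frame and several later pose samples can be sketched as follows; the `adjust` callback stands in for the matrix-based adjustment of step S107, and everything here is illustrative:

```python
def interpolate_frames(rendered_frame, pose_first, later_poses, adjust):
    """Produce one displayed frame per later pose sample from a single rendered frame,
    each adjusted by the information difference to the first-moment pose."""
    return [adjust(rendered_frame, pose, pose_first) for pose in later_poses]

# Dummy adjust: tag the frame with the pose difference instead of warping pixels.
adjust = lambda frame, pose, ref: (frame, pose - ref)

# One rendered frame (PicA) becomes two displayed frames (PicA0, PicA1 in fig. 5).
frames = interpolate_frames("PicA", 0, [2, 3], adjust)
assert frames == [("PicA", 2), ("PicA", 3)]
```

Because each rendered frame yields two displayed frames, the rendering device can halve its rendering frame rate for the same display rate, which is the bandwidth saving described above.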
The method of embodiments of the present invention is set forth above in detail and the apparatus of embodiments of the present invention is provided below.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a head-mounted display 10 according to an embodiment of the present invention. As shown in fig. 6, the apparatus includes a wireless transmission unit 11, a motion sensor unit 12, and an image processing unit 13, wherein the respective units are described in detail as follows.
A wireless transmission unit 11, configured to send posture information of the head-mounted display at a first time to a rendering device;
the wireless transmission unit 11 is further configured to receive video data from the rendering device, where the video data includes image data rendered according to the posture information;
a motion sensor unit 12 configured to acquire posture information of the head-mounted display at a second time, the second time being later than the first time;
an image processing unit 13, configured to adjust an image indicated by the image data received by the wireless transmission unit 11 according to an information difference between the posture information at the second time and the posture information at the first time, which are acquired by the motion sensor unit 12, so as to obtain an adjusted image for display.
Optionally, the head-mounted display 10 further includes:
the wireless transmission unit 11 is further configured to send first time information indicating the first time to the rendering device, where the first time information is bound to the posture information of the first time;
the video data further comprises the first time information bound with the video data;
the motion sensor unit 12 is further configured to obtain the posture information at the first time bound with the first time information,
the image processing unit 13 is specifically configured to: and adjusting the image indicated by the image data received by the wireless transmission unit 11 according to the information difference between the attitude information at the second time and the attitude information at the first time, which is acquired by the motion sensor unit 12.
Optionally, the attitude information at the first time includes an attitude angle at the first time, and the attitude information at the second time includes an attitude angle at the second time.
Optionally, the image processing unit 13 is specifically configured to:
and multiplying the transformation matrix corresponding to the information difference with the image indicated by the image information to obtain the adjusted image.
Optionally, the head-mounted display 10 further comprises:
the motion sensor unit 12 is further configured to acquire posture information of the head-mounted display at a third time, where the third time is later than the first time, and the second time and the third time are not the same time;
the image processing unit 13 is further configured to adjust an image indicated by the image data received by the wireless transmission unit 11 according to an information difference between the posture information at the third time and the posture information at the first time, which is acquired by the motion sensor unit 12, so as to obtain another adjusted image for display.
In the embodiment of the invention, the head-mounted display acquires its posture information at the first moment and sends it to the rendering device; the rendering device renders video data/image data according to that posture information and transmits it back to the head-mounted display; after acquiring its posture information at the second moment, the head-mounted display adjusts the received video data/image data according to the information difference between the posture information at the second moment and the posture information at the first moment. Because the posture information at the second moment does not need to be sent to the rendering device, the transmission delay of the posture information is reduced, so that the adjusted image data can match the head rotation. On the one hand, this reduces the picture drawing delay caused by the scene change due to head rotation. On the other hand, as shown in fig. 2(b), when the rendering device (such as the AP) transmits the image data A1 generated by rendering to the head-mounted display (such as the HMD) at time T2 for display, the HMD, owing to the transmission delay of the image data/video data, receives and displays A1 only at time T3; A1, however, was generated according to the information difference between MU2 and MU0, whereas, to eliminate the picture drawing delay, the image data displayed at time T3 should be obtained according to the information difference between MU3 and MU0. Therefore, in the embodiment of the present invention, the HMD can adjust the rendered image data with the posture information of the latest moment, further matching the head rotation with the displayed image data. Hence, by adopting the embodiment of the invention, the picture drawing delay caused by the scene change due to head rotation can be eliminated.
In addition, through the information difference between the posture information at the third moment and the posture information at the first moment, together with the information difference between the posture information at the second moment and the posture information at the first moment, the rendered frame of image data undergoes two different displacement adjustments, thereby realizing frame interpolation of the image data, which allows the rendering frame rate of the image data to be reduced and thus reduces the transmission bandwidth of the image data.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a rendering apparatus 20 according to an embodiment of the present invention. As shown in fig. 7, the apparatus includes a wireless transmission unit 21 and a rendering display unit 22, wherein each unit is described in detail as follows.
A wireless transmission unit 21 configured to receive posture information at a first time from a head mounted display and first time information bound with the posture information at the first time, the first time information indicating a first time;
a rendering display unit 22, configured to render to obtain image data according to the posture information at the first time received by the wireless transmission unit 21;
the wireless transmission unit 21 is further configured to send the image data obtained by the rendering and displaying unit 22 and the first time information bound with the image data and received by the wireless transmission unit 21 to the head-mounted display.
In the embodiment of the invention, the head-mounted display acquires its posture information at the first moment and sends it to the rendering device; the rendering device renders video data/image data according to that posture information and transmits it back to the head-mounted display; after acquiring its posture information at the second moment, the head-mounted display adjusts the received video data/image data according to the information difference between the posture information at the second moment and the posture information at the first moment. Because the posture information at the second moment does not need to be sent to the rendering device, the transmission delay of the posture information is reduced, so that the adjusted image data can match the head rotation. On the one hand, this reduces the picture drawing delay caused by the scene change due to head rotation. On the other hand, as shown in fig. 2(b), when the rendering device (such as the AP) transmits the image data A1 generated by rendering to the head-mounted display (such as the HMD) at time T2 for display, the HMD, owing to the transmission delay of the image data/video data, receives and displays A1 only at time T3; A1, however, was generated according to the information difference between MU2 and MU0, whereas, to eliminate the picture drawing delay, the image data displayed at time T3 should be obtained according to the information difference between MU3 and MU0. Therefore, in the embodiment of the present invention, the HMD can adjust the rendered image data with the posture information of the latest moment, further matching the head rotation with the displayed image data. Hence, by adopting the embodiment of the invention, the picture drawing delay caused by the scene change due to head rotation can be eliminated.
The head-mounted display in the embodiment shown in fig. 8 may be implemented as the head-mounted display shown in fig. 3 and fig. 6. As shown in fig. 8, a schematic structural diagram of a head-mounted display is provided for an embodiment of the present invention. The head-mounted display 1000 shown in fig. 8 includes a processor 1001, a transceiver 1004, and a sensor 1005, where the processor 1001 is coupled to the transceiver 1004, for example via a bus 1002. Optionally, the head-mounted display 1000 may further include a memory 1003. In addition, the number of transceivers 1004 is not limited to one in practical applications, and the structure of the head-mounted display 1000 does not limit the embodiment of the present invention.
The processor 1001 is applied to the embodiment of the present invention, and is used to implement the functions of the image processing unit 13 shown in fig. 6. The transceiver 1004 includes a transmitter and a receiver, and the transceiver 1004 is applied to the embodiment of the present invention to realize the functions of the wireless transmission unit 11 shown in fig. 6. The sensor 1005 includes a motion sensor, and the sensor 1005 is applied to the embodiment of the present invention for realizing the function of the motion sensor unit 12 shown in fig. 6.
The processor 1001 may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 1001 may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 1002 may include a path that transfers information between the above components. The bus 1002 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 1002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.
The memory 1003 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Optionally, the memory 1003 is used for storing application program codes for implementing the present invention, and the processor 1001 controls the execution. The processor 1001 is configured to execute application code stored in the memory 1003 to implement the actions of the head mounted display provided by the embodiment shown in fig. 6.
In an embodiment of the present invention, a computer storage medium is further provided for storing computer software instructions for the head-mounted display, which includes a program designed for the head-mounted display to execute the above aspects.
The rendering device in the embodiment shown in fig. 9 may be implemented as the rendering device shown in fig. 7. As shown in fig. 9, a schematic structural diagram of a rendering device is provided for an embodiment of the present invention. The rendering device 2000 shown in fig. 9 includes a processor 2001 and a transceiver 2004, where the processor 2001 is coupled to the transceiver 2004, for example via a bus 2002. Optionally, the rendering device 2000 may further include a memory 2003. It should be noted that the number of transceivers 2004 is not limited to one in practical applications, and the structure of the rendering device 2000 does not limit the embodiment of the present invention.
The processor 2001 is applied to the embodiment of the present invention, and is configured to implement the function of the rendering and displaying unit 22 shown in fig. 7. The transceiver 2004 includes a receiver and a transmitter, and the transceiver 2004 is applied to the embodiment of the present invention for realizing the function of the wireless transmission unit 21 shown in fig. 7.
The processor 2001 may be a CPU, general purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 2001 may also be a combination of computing functions, e.g., comprising one or more microprocessors, a combination of a DSP and a microprocessor, or the like.
Bus 2002 may include a path that conveys information between the aforementioned components. The bus 2002 may be a PCI bus or an EISA bus, etc. The bus 2002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
The memory 2003 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disk storage, optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Optionally, the memory 2003 is used to store application program code for performing aspects of the present invention and is controlled in execution by the processor 2001. The processor 2001 is used to execute application program code stored in the memory 2003 to implement the actions of the rendering device provided by the embodiment shown in fig. 7.
In an embodiment of the present invention, a computer storage medium is provided for storing computer software instructions for the rendering apparatus, which includes a program designed for executing the above aspect for the rendering apparatus.
While the invention has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus (device), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program stored/distributed on a suitable medium supplied together with or as part of other hardware, may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the invention has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the invention. Accordingly, the specification and figures are merely exemplary of the invention as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A method of image processing, the method being performed by a head-mounted display, the method comprising:
sending pose information of the head-mounted display at a first moment to a rendering device;
receiving video data from the rendering device, wherein the video data comprises image data rendered according to the pose information;
acquiring pose information of the head-mounted display at a second moment, wherein the second moment is later than the first moment;
adjusting the image indicated by the image data according to the information difference between the pose information at the second moment and the pose information at the first moment, so as to obtain an adjusted image for display;
acquiring pose information of the head-mounted display at a third moment, wherein the third moment is later than the first moment, and the second moment and the third moment are different moments;
adjusting the image indicated by the image data according to the information difference between the pose information at the third moment and the pose information at the first moment, so as to obtain another adjusted image for display;
wherein the adjusting the image indicated by the image data according to the information difference between the pose information at the second moment and the pose information at the first moment to obtain an adjusted image comprises:
multiplying the image indicated by the image data by a transformation matrix corresponding to the information difference to obtain the adjusted image, wherein the pose information comprises a pose angle, and the information difference is the difference obtained by subtracting the pose angle at the first moment from the pose angle at the second moment.
2. The method of claim 1, wherein the method further comprises: sending, to the rendering device, first moment information used for indicating the first moment, wherein the first moment information is bound with the pose information at the first moment;
the video data further comprises the first moment information bound with the video data;
and the adjusting the image indicated by the image data according to the information difference between the pose information at the second moment and the pose information at the first moment comprises:
acquiring the pose information at the first moment bound with the first moment information, and adjusting the image indicated by the image data according to the information difference between the pose information at the second moment and the pose information at the first moment.
3. The method of claim 1 or 2, wherein the pose information at the first moment comprises a pose angle at the first moment, and the pose information at the second moment comprises a pose angle at the second moment.
4. An image processing method performed by a rendering device, the method comprising:
receiving, from a head-mounted display, pose information at a first moment and first moment information bound with the pose information at the first moment, wherein the first moment information is used for indicating the first moment and for uniquely identifying the pose information at the first moment;
rendering image data according to the pose information at the first moment;
and sending the image data and the first moment information bound with the image data to the head-mounted display, so that the head-mounted display makes two different displacement adjustments to the image data through an information difference between pose information at a third moment and the pose information at the first moment and an information difference between pose information at a second moment and the pose information at the first moment.
5. A head-mounted display, the head-mounted display comprising:
a wireless transmission unit, used for sending pose information of the head-mounted display at a first moment to a rendering device;
the wireless transmission unit is further used for receiving video data from the rendering device, wherein the video data comprises image data rendered according to the pose information;
a motion sensor unit, used for acquiring pose information of the head-mounted display at a second moment later than the first moment, and for acquiring pose information of the head-mounted display at a third moment later than the first moment, the second moment and the third moment being different moments;
an image processing unit, configured to adjust the image indicated by the image data received by the wireless transmission unit according to an information difference between the pose information at the second moment and the pose information at the first moment acquired by the motion sensor unit, so as to obtain an adjusted image for display, and to adjust the image indicated by the image data received by the wireless transmission unit according to an information difference between the pose information at the third moment and the pose information at the first moment acquired by the motion sensor unit, so as to obtain another adjusted image for display;
wherein the image processing unit is specifically configured to multiply the image indicated by the image data by a transformation matrix corresponding to the information difference, so as to obtain the adjusted image;
and the pose information comprises a pose angle, and the information difference is the difference obtained by subtracting the pose angle at the first moment from the pose angle at the second moment.
6. The head-mounted display of claim 5, wherein:
the wireless transmission unit is further used for sending, to the rendering device, first moment information used for indicating the first moment, the first moment information being bound with the pose information at the first moment;
the video data further comprises the first moment information bound with the video data;
the motion sensor unit is further configured to acquire the pose information at the first moment bound with the first moment information; and
the image processing unit is specifically configured to adjust the image indicated by the image data received by the wireless transmission unit according to the information difference between the pose information at the second moment and the pose information at the first moment acquired by the motion sensor unit.
7. The head-mounted display of claim 5 or 6, wherein the pose information at the first moment comprises a pose angle at the first moment, and the pose information at the second moment comprises a pose angle at the second moment.
8. A rendering device, characterized in that the rendering device comprises:
a wireless transmission unit, used for receiving, from a head-mounted display, pose information at a first moment and first moment information bound with the pose information at the first moment, wherein the first moment information is used for indicating the first moment and for uniquely identifying the pose information at the first moment;
a rendering display unit, used for rendering image data according to the pose information at the first moment received by the wireless transmission unit;
and the wireless transmission unit is further configured to send the image data obtained by the rendering display unit, together with the first moment information bound with the image data, to the head-mounted display, so that the head-mounted display performs two different displacement adjustments on the image data through an information difference between pose information at a third moment and the pose information at the first moment and an information difference between pose information at a second moment and the pose information at the first moment.
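Outside the claim language itself, the core operation the claims recite — render against the pose reported at a first moment, then warp the delivered image by the pose-angle difference measured at a later moment — can be sketched as follows. This is an illustrative sketch only, not code from the patent: the function names (`rotation_matrix`, `reproject`), the yaw-pitch-roll Euler convention, and the treatment of image content as view-space direction vectors are all assumptions made for the example.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Build a 3x3 rotation matrix from Euler angles in radians
    (yaw about z, pitch about y, roll about x; an assumed convention)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def reproject(direction, pose_angles_t1, pose_angles_t2):
    """Adjust a rendered view-space direction by the difference between
    the pose at the render moment (t1) and a later display moment (t2)."""
    # Information difference: pose angle at the later moment minus the
    # pose angle at the first moment, as in the claims.
    delta = np.subtract(pose_angles_t2, pose_angles_t1)
    # Transformation matrix corresponding to that difference, multiplied
    # with the rendered content to obtain the adjusted image.
    T = rotation_matrix(*delta)
    return T @ np.asarray(direction, dtype=float)
```

Calling `reproject` twice, once with the pose acquired at the second moment and once with the pose acquired at the third moment, yields the two differently adjusted images for display that the claims describe, without re-rendering from the rendering device.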
CN201710169082.6A 2017-03-21 2017-03-21 Image processing method, head-mounted display and rendering equipment Active CN106998409B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710169082.6A CN106998409B (en) 2017-03-21 2017-03-21 Image processing method, head-mounted display and rendering equipment
PCT/CN2018/078131 WO2018171421A1 (en) 2017-03-21 2018-03-06 Image processing method, head mounted display, and rendering device

Publications (2)

Publication Number Publication Date
CN106998409A CN106998409A (en) 2017-08-01
CN106998409B true CN106998409B (en) 2020-11-27

Family

ID=59431689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710169082.6A Active CN106998409B (en) 2017-03-21 2017-03-21 Image processing method, head-mounted display and rendering equipment

Country Status (2)

Country Link
CN (1) CN106998409B (en)
WO (1) WO2018171421A1 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106998409B (en) * 2017-03-21 2020-11-27 华为技术有限公司 Image processing method, head-mounted display and rendering equipment
CN109698949B (en) * 2017-10-20 2020-08-21 腾讯科技(深圳)有限公司 Video processing method, device and system based on virtual reality scene
CN107835404A (en) * 2017-11-13 2018-03-23 歌尔科技有限公司 Method for displaying image, equipment and system based on wear-type virtual reality device
WO2019183914A1 (en) * 2018-03-30 2019-10-03 Intel Corporation Dynamic video encoding and view adaptation in wireless computing environments
CN108829627B (en) * 2018-05-30 2020-11-20 青岛小鸟看看科技有限公司 Synchronous control method and system between virtual reality devices
CN109271022B (en) * 2018-08-28 2021-06-11 北京七鑫易维信息技术有限公司 Display method and device of VR equipment, VR equipment and storage medium
CN110868581A (en) * 2018-08-28 2020-03-06 华为技术有限公司 Image display method, device and system
US11500455B2 (en) 2018-10-16 2022-11-15 Nolo Co., Ltd. Video streaming system, video streaming method and apparatus
CN111065053B (en) * 2018-10-16 2021-08-17 北京凌宇智控科技有限公司 System and method for video streaming
CN111064985A (en) * 2018-10-16 2020-04-24 北京凌宇智控科技有限公司 System, method and device for realizing video streaming
CN109743626B (en) * 2019-01-02 2022-08-12 京东方科技集团股份有限公司 Image display method, image processing method and related equipment
CN109725730B (en) 2019-01-02 2023-05-26 京东方科技集团股份有限公司 Head-mounted display device and driving method thereof, display system and driving method thereof
CN109782912B (en) * 2019-01-03 2022-10-25 京东方科技集团股份有限公司 Method, apparatus, medium, and electronic device for measuring device delay
CN109753158B (en) * 2019-01-11 2021-01-22 京东方科技集团股份有限公司 VR device delay determination method and control terminal
CN110244840A (en) * 2019-05-24 2019-09-17 华为技术有限公司 Image processing method, relevant device and computer storage medium
CN110351480B (en) * 2019-06-13 2021-01-15 歌尔光学科技有限公司 Image processing method and device for electronic equipment and electronic equipment
CN110505518A (en) * 2019-08-05 2019-11-26 青岛小鸟看看科技有限公司 One kind wearing display equipment, data transmission method and system
CN111338546A (en) * 2020-02-28 2020-06-26 歌尔科技有限公司 Method for controlling head-mounted display device, terminal and storage medium
CN111243027B (en) * 2020-02-28 2023-06-23 京东方科技集团股份有限公司 Delay measurement method, device and system
CN113728615A (en) * 2020-03-31 2021-11-30 深圳市大疆创新科技有限公司 Image processing method, image processing device, user equipment, aircraft and system
CN113589919A (en) * 2020-04-30 2021-11-02 华为技术有限公司 Image processing method and device
CN111586391B (en) * 2020-05-07 2022-07-08 中国联合网络通信集团有限公司 Image processing method, device and system
CN112073632A (en) * 2020-08-11 2020-12-11 联想(北京)有限公司 Image processing method, apparatus and storage medium
CN111813230B (en) * 2020-09-14 2021-03-19 芋头科技(杭州)有限公司 Interaction method and device on AR glasses
CN112104855B (en) * 2020-09-17 2022-05-31 联想(北京)有限公司 Image processing method and device
CN113219668B (en) * 2021-05-19 2023-09-08 闪耀现实(无锡)科技有限公司 Method and device for refreshing screen of head-mounted display device and electronic device
CN113888685A (en) * 2021-09-29 2022-01-04 青岛歌尔声学科技有限公司 Control method of head-mounted equipment and picture rendering method
CN114168096B (en) * 2021-12-07 2023-07-25 深圳创维新世界科技有限公司 Display method and system of output picture, mobile terminal and storage medium
CN114979615A (en) * 2022-05-11 2022-08-30 闪耀现实(无锡)科技有限公司 Method and device for displaying picture on head-mounted display device and electronic device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN105847785A (en) * 2016-05-09 2016-08-10 上海乐相科技有限公司 Image processing method, device and system
CN106502427A (en) * 2016-12-15 2017-03-15 北京国承万通信息科技有限公司 Virtual reality system and its scene rendering method

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN105807798B (en) * 2014-12-31 2018-11-30 上海乐相科技有限公司 A kind of head-wearing type intelligent glasses vibration control method and device
KR20160139461A (en) * 2015-05-27 2016-12-07 엘지전자 주식회사 Head mounted display and, the controlling method thereof
CN105809144B (en) * 2016-03-24 2019-03-08 重庆邮电大学 A kind of gesture recognition system and method using movement cutting
CN106354258B (en) * 2016-08-30 2019-04-05 上海乐相科技有限公司 A kind of picture display process and device of virtual reality device
CN106385625A (en) * 2016-09-29 2017-02-08 宇龙计算机通信科技(深圳)有限公司 Image intermediate frame generation method and device
CN106998409B (en) * 2017-03-21 2020-11-27 华为技术有限公司 Image processing method, head-mounted display and rendering equipment

Also Published As

Publication number Publication date
WO2018171421A1 (en) 2018-09-27
CN106998409A (en) 2017-08-01

Similar Documents

Publication Publication Date Title
CN106998409B (en) Image processing method, head-mounted display and rendering equipment
US11321870B2 (en) Camera attitude tracking method and apparatus, device, and system
RU2638776C1 (en) Image generating device and method
US10410562B2 (en) Image generating device and image generating method
CN109743626B (en) Image display method, image processing method and related equipment
JP2019028368A (en) Rendering device, head-mounted display, image transmission method, and image correction method
US20140192164A1 (en) System and method for determining depth information in augmented reality scene
US20200241731A1 (en) Virtual reality vr interface generation method and apparatus
EP3058716A1 (en) Refocusable images
US11727648B2 (en) Method and device for synchronizing augmented reality coordinate systems
JP6216398B2 (en) Image generating apparatus and image generating method
RU2723920C1 (en) Support of augmented reality software application
CN108882156A (en) A kind of method and device for calibrating locating base station coordinate system
KR20200073784A (en) Server, device and method for providing virtual reality service
CN109766006B (en) Virtual reality scene display method, device and equipment
KR20180076342A (en) Estimation system, estimation method, and estimation program
CN111885366A (en) Three-dimensional display method and device for virtual reality screen, storage medium and equipment
US9445015B2 (en) Methods and systems for adjusting sensor viewpoint to a virtual viewpoint
JP6487512B2 (en) Head mounted display and image generation method
WO2023108016A1 (en) Augmented reality using a split architecture
JP2021510442A (en) Augmented reality image provision method and program using depth data
CN109218252A (en) A kind of display methods of virtual reality, device and its equipment
CN111949114A (en) Image processing method and device and terminal
CN117768599A (en) Method, device, system, electronic equipment and storage medium for processing image
CN111213111A (en) Wearable device and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant