WO2021147465A1 - Image rendering method, electronic device, and system - Google Patents

Image rendering method, electronic device, and system

Info

Publication number
WO2021147465A1
Authority
WIPO (PCT)
Prior art keywords
posture data, electronic device, head, data, posture
Application number
PCT/CN2020/127599
Other languages
English (en), Chinese (zh)
Inventors
付钟奇, 沈钢, 姚建江, 朱应成, 单双, 赖武军
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)

Classifications

    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G06T 2200/04 Indexing scheme for image data processing or generation involving 3D image data
    • G06T 2210/61 Scene description

Definitions

  • This application relates to the field of image processing technology, and in particular to an image rendering method, electronic device and system.
  • Virtual reality (VR) is a means of human-computer interaction built on computer and sensor technology. It uses computer simulation to generate a three-dimensional virtual world and provides the user with simulated visual, auditory, tactile, and other sensory input, so that the user can observe things in the three-dimensional space as if actually present in the scene.
  • Head-mounted devices based on VR technology have been widely used in education, entertainment, medical care, design, and other fields. As the technology of portable head-mounted devices matures, it has become practical for users to enjoy a VR experience with a portable head-mounted device while traveling.
  • Generally, a head-mounted device is equipped with a motion sensor, such as an accelerometer or a gyroscope, to collect its own posture data, and treats its own posture data as equivalent to the posture data of the user's head when rendering the image displayed by the head-mounted device.
  • However, when the user rides in a vehicle, the motion of the vehicle itself affects the posture data measured by the head-mounted device, causing the image displayed by the head-mounted device to shift.
  • For example, when the user does not actively rotate the head, that is, when the user's head does not move relative to the vehicle, the user does not intend to change the image displayed by the head-mounted device. If the vehicle turns at this moment, the posture measured by the head-mounted device changes, and the displayed image changes with it. The image displayed by the head-mounted device has thus changed, contrary to the real intention of the user.
  • The user's intuitive feeling is that the image has drifted, which reduces the user's immersive experience with the head-mounted device and easily causes dizziness.
  • The image rendering method, electronic device, and system provided by this application can reduce the deviation of the image displayed by the head-mounted device caused by the motion posture of the vehicle, and improve the experience of using the head-mounted device.
  • In a first aspect, an image rendering method is provided, applied to a system including an electronic device and a head-mounted device. The method includes: the electronic device acquires first posture data of the electronic device; the head-mounted device acquires second posture data of the head-mounted device and sends the second posture data to the electronic device; the electronic device receives the second posture data sent by the head-mounted device; the electronic device obtains third posture data according to the first posture data and the second posture data, where the third posture data is used to characterize the rotation posture of the user's head; the electronic device processes three-dimensional scene data according to the third posture data to obtain a first image; the electronic device sends the first image to the head-mounted device; and the head-mounted device displays the first image.
  • In this way, the embodiment of this application obtains the posture data of an electronic device (for example, a mobile phone) that is stationary relative to the vehicle, which is equivalent to the running posture data of the vehicle, and offsets the vehicle's running posture data from the posture data of the head-mounted device, thereby obtaining the posture data of the user's active head movement. Rendering the image finally displayed by the head-mounted device according to the user's active head posture data is consistent with the user's intention, avoids giving the user a sense of image offset, and improves the user's VR experience.
  • In a possible implementation, the first posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the electronic device, and the second posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the head-mounted device.
  • In a possible implementation, the electronic device obtaining third posture data according to the first posture data and the second posture data includes: if the electronic device determines that the first posture data satisfies a first condition, the electronic device determines that the second posture data is the third posture data; if the electronic device determines that the first posture data does not satisfy the first condition, the electronic device determines to offset the first posture data from the second posture data to obtain the third posture data.
  • The first condition is used to characterize whether the electronic device has undergone a large deflection. If a large deflection occurs, it is considered that the electronic device may have been accidentally moved or the vehicle has made a large change in direction, and the first posture data cannot be offset from the second posture data; that is, the second posture data and the stored three-dimensional scene data must be used directly to render the first image, to avoid a large deflection of the image displayed by the VR glasses. If it is determined that the electronic device has not deflected significantly, the electronic device is considered to be in a stable state, and the first posture data can be used to adjust the second posture data to obtain the third posture data.
  • the first posture data further includes the angular velocity of the electronic device, and the first condition includes that the angular velocity of the electronic device is greater than a first threshold.
  • In a possible implementation, the electronic device obtaining third posture data according to the first posture data and the second posture data further includes: the electronic device judging whether the second posture data satisfies a second condition; if the second posture data satisfies the second condition, the electronic device determines that the second posture data is the third posture data; if the second posture data does not satisfy the second condition, the electronic device determines to offset the first posture data from the second posture data to obtain the third posture data.
  • The second condition is used to characterize whether the head-mounted device is in a stable state, that is, whether the head-mounted device has no deflection or only a negligible deflection. If the head-mounted device is in a stable state, the electronic device is basically in a stable state as well; the value of the first posture data is then zero or close to zero and has little effect on the second posture data, so the step of offsetting the first posture data from the second posture data can be skipped. If the head-mounted device is in an unstable state, the electronic device is basically in an unstable state as well; the first posture data then has a larger value and a greater impact on the second posture data, so the first posture data needs to be offset from the second posture data.
  • In this way, the workload of calculating the third posture data can be reduced, and the processing rate of the electronic device can be increased.
  • In a possible implementation, the second posture data further includes the acceleration of the head-mounted device and/or the angular velocity of the head-mounted device, and the second condition includes the acceleration of the head-mounted device being less than or equal to a second threshold, and/or the angular velocity of the head-mounted device being less than or equal to a third threshold.
  • In a possible implementation, the electronic device determining to offset the first posture data from the second posture data to obtain the third posture data includes: the electronic device subtracting the Euler angles of the electronic device from the Euler angles of the head-mounted device to obtain the third posture data.
  • In addition, the posture data actually used may be selected according to the actual scenario. It is understandable that when a user uses a head-mounted device, the user usually changes the picture displayed by the head-mounted device by turning the head left and right or moving the head up and down; the posture data can then consider only the yaw angle and the pitch angle. For another example, if the head-mounted device is set to change the displayed image only when the user turns the head left and right, that is, moving the head up and down does not change the image displayed by the head-mounted device, then the posture data can include only the yaw angle. This helps to simplify the calculation process and improve the processing efficiency of the electronic device.
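  • The selection logic in the preceding implementations can be summarized in a short sketch, shown below. This is an illustration only, not the claimed implementation: the function and field names and the threshold values are assumptions, and posture is reduced to yaw, pitch, and roll Euler angles.

```python
# Illustrative sketch of obtaining the third posture data (the user's
# active head rotation) from the first posture data (electronic device /
# vehicle) and the second posture data (head-mounted device).
# All field names and threshold values are assumptions.

FIRST_THRESHOLD_DPS = 60.0   # first threshold, phone angular velocity, deg/s (assumed)
SECOND_THRESHOLD_MS2 = 0.2   # second threshold, headset acceleration, m/s^2 (assumed)
THIRD_THRESHOLD_DPS = 1.0    # third threshold, headset angular velocity, deg/s (assumed)

def third_posture(first: dict, second: dict) -> dict:
    """Return Euler angles (yaw/pitch/roll, degrees) of the active head rotation."""
    # First condition: the electronic device deflects sharply (it was bumped,
    # or the vehicle turned sharply) -> do not offset; use the second
    # posture data directly.
    if abs(first["angular_velocity"]) > FIRST_THRESHOLD_DPS:
        return {axis: second[axis] for axis in ("yaw", "pitch", "roll")}

    # Second condition: the head-mounted device is stable -> the first
    # posture data is near zero, so the offsetting step can be skipped.
    if (second["acceleration"] <= SECOND_THRESHOLD_MS2
            and abs(second["angular_velocity"]) <= THIRD_THRESHOLD_DPS):
        return {axis: second[axis] for axis in ("yaw", "pitch", "roll")}

    # Otherwise, offset the first posture data from the second posture data
    # by subtracting the Euler angles component-wise.
    return {axis: second[axis] - first[axis] for axis in ("yaw", "pitch", "roll")}
```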
  • In a possible implementation, the electronic device processing the three-dimensional scene data according to the third posture data to obtain the first image includes: the electronic device obtains a second image according to the three-dimensional scene data; the electronic device rotates the second image according to the rotation matrix in the third posture data to obtain the first image.
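  • As a sketch of this last step, a rotation matrix can be built from the Euler angles in the third posture data and applied to the rendering orientation before the first image is produced. The Y-X-Z rotation order and the numpy formulation below are assumptions for illustration, not the patent's stated method.

```python
import numpy as np

def rotation_matrix(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Rotation matrix from Euler angles in degrees (Y-X-Z order, assumed)."""
    y, p, r = np.radians([yaw, pitch, roll])
    ry = np.array([[np.cos(y),  0.0, np.sin(y)],
                   [0.0,        1.0, 0.0      ],
                   [-np.sin(y), 0.0, np.cos(y)]])
    rx = np.array([[1.0, 0.0,        0.0       ],
                   [0.0, np.cos(p), -np.sin(p)],
                   [0.0, np.sin(p),  np.cos(p)]])
    rz = np.array([[np.cos(r), -np.sin(r), 0.0],
                   [np.sin(r),  np.cos(r), 0.0],
                   [0.0,        0.0,       1.0]])
    return ry @ rx @ rz

# Rotate the rendering camera's forward vector by the third posture data;
# re-projecting the second image with this orientation yields the first image.
forward = np.array([0.0, 0.0, -1.0])
rotated_forward = rotation_matrix(yaw=30.0, pitch=0.0, roll=0.0) @ forward
```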
  • In a second aspect, an image rendering method is provided, applied to an electronic device. The method includes: the electronic device acquires first posture data of the electronic device and receives second posture data of the head-mounted device sent by the head-mounted device; the electronic device obtains third posture data according to the first posture data and the second posture data, where the third posture data is used to characterize the rotation posture of the user's head; the electronic device processes three-dimensional scene data according to the third posture data to obtain a first image; and the electronic device sends the first image to the head-mounted device.
  • In a possible implementation, the first posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the electronic device, and the second posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the head-mounted device.
  • In a possible implementation, the electronic device obtaining third posture data according to the first posture data and the second posture data includes: if the electronic device determines that the first posture data satisfies a first condition, the electronic device determines that the second posture data is the third posture data; if the electronic device determines that the first posture data does not satisfy the first condition, the electronic device determines to offset the first posture data from the second posture data to obtain the third posture data.
  • the first posture data further includes the angular velocity of the electronic device, and the first condition includes that the angular velocity of the electronic device is greater than a first threshold.
  • In a possible implementation, the electronic device obtaining third posture data according to the first posture data and the second posture data further includes: the electronic device judging whether the second posture data satisfies a second condition; if the second posture data satisfies the second condition, the electronic device determines that the second posture data is the third posture data; if the second posture data does not satisfy the second condition, the electronic device determines to offset the first posture data from the second posture data to obtain the third posture data.
  • In a possible implementation, the second posture data further includes the acceleration of the head-mounted device and/or the angular velocity of the head-mounted device, and the second condition includes the acceleration of the head-mounted device being less than or equal to a second threshold, and/or the angular velocity of the head-mounted device being less than or equal to a third threshold.
  • In a possible implementation, the electronic device determining to offset the first posture data from the second posture data to obtain the third posture data includes: the electronic device subtracting the Euler angles of the electronic device from the Euler angles of the head-mounted device to obtain the third posture data.
  • In a possible implementation, the electronic device processing the three-dimensional scene data according to the third posture data to obtain the first image includes: the electronic device obtains a second image according to the three-dimensional scene data; the electronic device rotates the second image according to the rotation matrix in the third posture data to obtain the first image.
  • In a third aspect, an image rendering method is provided, applied to a system including an electronic device and a head-mounted device. The method includes: the electronic device acquires first posture data of the electronic device and sends the first posture data to the head-mounted device; the head-mounted device acquires second posture data of the head-mounted device and receives the first posture data sent by the electronic device; the head-mounted device obtains third posture data according to the first posture data and the second posture data, where the third posture data is used to characterize the rotation posture of the user's head; the head-mounted device processes three-dimensional scene data according to the third posture data to obtain a first image; and the head-mounted device displays the first image.
  • In a possible implementation, the first posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the electronic device, and the second posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the head-mounted device.
  • In a possible implementation, the head-mounted device obtaining third posture data according to the first posture data and the second posture data includes: if the head-mounted device determines that the first posture data satisfies a first condition, the head-mounted device determines that the second posture data is the third posture data; if the head-mounted device determines that the first posture data does not satisfy the first condition, the head-mounted device determines to offset the first posture data from the second posture data to obtain the third posture data.
  • the first posture data further includes the angular velocity of the electronic device, and the first condition includes that the angular velocity of the electronic device is greater than a first threshold.
  • In a possible implementation, the head-mounted device obtaining third posture data according to the first posture data and the second posture data further includes: the head-mounted device judging whether the second posture data satisfies a second condition; if the second posture data satisfies the second condition, the head-mounted device determines that the second posture data is the third posture data; if the second posture data does not satisfy the second condition, the head-mounted device determines to offset the first posture data from the second posture data to obtain the third posture data.
  • In a possible implementation, the second posture data further includes the acceleration of the head-mounted device and/or the angular velocity of the head-mounted device, and the second condition includes the acceleration of the head-mounted device being less than or equal to a second threshold, and/or the angular velocity of the head-mounted device being less than or equal to a third threshold.
  • In a possible implementation, the head-mounted device determining to offset the first posture data from the second posture data to obtain the third posture data includes: the head-mounted device subtracting the Euler angles of the electronic device from the Euler angles of the head-mounted device to obtain the third posture data.
  • In a possible implementation, the head-mounted device processing the three-dimensional scene data according to the third posture data to obtain the first image includes: the head-mounted device obtains a second image according to the three-dimensional scene data; the head-mounted device rotates the second image according to the rotation matrix in the third posture data to obtain the first image.
  • In a fourth aspect, an image rendering method is provided, applied to a head-mounted device. The method includes: the head-mounted device acquires second posture data of the head-mounted device and receives first posture data of the electronic device sent by the electronic device; the head-mounted device obtains third posture data according to the first posture data and the second posture data, where the third posture data is used to characterize the rotation posture of the user's head; the head-mounted device processes three-dimensional scene data according to the third posture data to obtain a first image; and the head-mounted device displays the first image.
  • In a possible implementation, the first posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the electronic device, and the second posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the head-mounted device.
  • In a possible implementation, the head-mounted device obtaining third posture data according to the first posture data and the second posture data includes: if the head-mounted device determines that the first posture data satisfies a first condition, the head-mounted device determines that the second posture data is the third posture data; if the head-mounted device determines that the first posture data does not satisfy the first condition, the head-mounted device determines to offset the first posture data from the second posture data to obtain the third posture data.
  • the first posture data further includes the angular velocity of the electronic device, and the first condition includes that the angular velocity of the electronic device is greater than a first threshold.
  • In a possible implementation, the head-mounted device obtaining third posture data according to the first posture data and the second posture data further includes: the head-mounted device judging whether the second posture data satisfies a second condition; if the second posture data satisfies the second condition, the head-mounted device determines that the second posture data is the third posture data; if the second posture data does not satisfy the second condition, the head-mounted device determines to offset the first posture data from the second posture data to obtain the third posture data.
  • In a possible implementation, the second posture data further includes the acceleration of the head-mounted device and/or the angular velocity of the head-mounted device, and the second condition includes the acceleration of the head-mounted device being less than or equal to a second threshold, and/or the angular velocity of the head-mounted device being less than or equal to a third threshold.
  • In a possible implementation, the head-mounted device determining to offset the first posture data from the second posture data to obtain the third posture data includes: the head-mounted device subtracting the Euler angles of the electronic device from the Euler angles of the head-mounted device to obtain the third posture data.
  • In a possible implementation, the head-mounted device processing the three-dimensional scene data according to the third posture data to obtain the first image includes: the head-mounted device obtains a second image according to the three-dimensional scene data; the head-mounted device rotates the second image according to the rotation matrix in the third posture data to obtain the first image.
  • In a fifth aspect, an electronic device is provided, comprising a processor, a memory, and a communication interface, where the memory and the communication interface are coupled to the processor, and the memory is used to store computer program code including computer instructions. When the processor reads the computer instructions from the memory, the electronic device performs the following operations: acquiring first posture data of the electronic device, and receiving second posture data of the head-mounted device sent by the head-mounted device; obtaining third posture data according to the first posture data and the second posture data, where the third posture data is used to characterize the rotation posture of the user's head; processing three-dimensional scene data according to the third posture data to obtain a first image; and sending the first image to the head-mounted device.
  • In a possible implementation, the first posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the electronic device, and the second posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the head-mounted device.
  • In a possible implementation, obtaining the third posture data according to the first posture data and the second posture data includes: if it is determined that the first posture data satisfies a first condition, determining that the second posture data is the third posture data; if it is determined that the first posture data does not satisfy the first condition, determining to offset the first posture data from the second posture data to obtain the third posture data.
  • the first posture data further includes the angular velocity of the electronic device, and the first condition includes that the angular velocity of the electronic device is greater than a first threshold.
  • In a possible implementation, obtaining the third posture data according to the first posture data and the second posture data further includes: judging whether the second posture data satisfies a second condition; if the second posture data satisfies the second condition, determining that the second posture data is the third posture data; if the second posture data does not satisfy the second condition, determining to offset the first posture data from the second posture data to obtain the third posture data.
  • In a possible implementation, the second posture data further includes the acceleration of the head-mounted device and/or the angular velocity of the head-mounted device, and the second condition includes the acceleration of the head-mounted device being less than or equal to a second threshold, and/or the angular velocity of the head-mounted device being less than or equal to a third threshold.
  • In a possible implementation, determining to offset the first posture data from the second posture data to obtain the third posture data includes: subtracting the Euler angles of the electronic device from the Euler angles of the head-mounted device to obtain the third posture data.
  • In a possible implementation, processing the three-dimensional scene data according to the third posture data to obtain the first image includes: obtaining a second image according to the three-dimensional scene data; and rotating the second image according to the rotation matrix in the third posture data to obtain the first image.
  • In a sixth aspect, a head-mounted device is provided, comprising a processor, a memory, a communication interface, and a display screen, where the memory, the communication interface, and the display screen are coupled to the processor, and the memory is used to store computer program code including computer instructions. When the processor reads the computer instructions from the memory, the head-mounted device performs the following operations: acquiring second posture data of the head-mounted device, and receiving first posture data of the electronic device sent by the electronic device; obtaining third posture data according to the first posture data and the second posture data, where the third posture data is used to characterize the rotation posture of the user's head; processing three-dimensional scene data according to the third posture data to obtain a first image; and displaying the first image.
  • In a possible implementation, the first posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the electronic device, and the second posture data includes at least one of the Euler angles, quaternion, and rotation matrix of the head-mounted device.
  • In a possible implementation, the head-mounted device obtaining third posture data according to the first posture data and the second posture data includes: if the head-mounted device determines that the first posture data satisfies a first condition, the head-mounted device determines that the second posture data is the third posture data; if the head-mounted device determines that the first posture data does not satisfy the first condition, the head-mounted device determines to offset the first posture data from the second posture data to obtain the third posture data.
  • the first posture data further includes the angular velocity of the electronic device, and the first condition includes that the angular velocity of the electronic device is greater than a first threshold.
  • In a possible implementation, the head-mounted device obtaining third posture data according to the first posture data and the second posture data further includes: the head-mounted device judging whether the second posture data satisfies a second condition; if the second posture data satisfies the second condition, the head-mounted device determines that the second posture data is the third posture data; if the second posture data does not satisfy the second condition, the head-mounted device determines to offset the first posture data from the second posture data to obtain the third posture data.
  • In a possible implementation, the second posture data further includes the acceleration of the head-mounted device and/or the angular velocity of the head-mounted device, and the second condition includes the acceleration of the head-mounted device being less than or equal to a second threshold, and/or the angular velocity of the head-mounted device being less than or equal to a third threshold.
  • In a possible implementation, the head-mounted device determining to offset the first posture data from the second posture data to obtain the third posture data includes: the head-mounted device subtracting the Euler angles of the electronic device from the Euler angles of the head-mounted device to obtain the third posture data.
  • In a possible implementation, the head-mounted device processing the three-dimensional scene data according to the third posture data to obtain the first image includes: the head-mounted device obtains a second image according to the three-dimensional scene data; the head-mounted device rotates the second image according to the rotation matrix in the third posture data to obtain the first image.
  • In a seventh aspect, an apparatus is provided, where the apparatus is included in an electronic device, and the apparatus has the function of implementing the behavior of the electronic device in any one of the foregoing second aspect and the possible implementations of the second aspect.
  • This function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes at least one module or unit corresponding to the above-mentioned functions. For example, a sensor module or unit, a communication module or unit, and a processing module or unit.
  • the device may be a chip system.
  • In an eighth aspect, an apparatus is provided, where the apparatus is included in a head-mounted device, and the apparatus has the function of implementing the behavior of the head-mounted device in any one of the foregoing fourth aspect and the possible implementations of the fourth aspect.
  • This function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes at least one module or unit corresponding to the above-mentioned functions. For example, a sensor module or unit, a communication module or unit, and a processing module or unit.
  • The device may be a chip system.
  • In a ninth aspect, a computer-readable storage medium is provided, including computer instructions that, when run on an electronic device, cause the electronic device to execute the method described in the foregoing second aspect and any one of the possible implementations of the second aspect.
  • In a tenth aspect, a computer-readable storage medium is provided, including computer instructions that, when run on a head-mounted device, cause the head-mounted device to perform the method described in the foregoing fourth aspect and any one of the possible implementations of the fourth aspect.
  • In an eleventh aspect, a computer program product is provided, which, when run on a computer, causes the computer to execute the method described in the foregoing second aspect and any one of the possible implementations of the second aspect, or to perform the method described in the foregoing fourth aspect and any one of the possible implementations of the fourth aspect.
  • In a twelfth aspect, a chip system is provided, including a processor. When the processor executes an instruction, the processor executes the method described in the foregoing second aspect and any one of the possible implementations of the second aspect, or performs the method described in the foregoing fourth aspect and any one of the possible implementations of the fourth aspect.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the application
  • FIG. 2 is a schematic structural diagram of a mobile terminal provided by an embodiment of this application.
  • FIG. 3 is a schematic diagram illustrating some posture data provided by an embodiment of the application.
  • FIG. 4A is a schematic diagram illustrating a user's head posture and a display image of a head-mounted device according to an embodiment of the application;
  • FIG. 4B is a schematic diagram illustrating another user's head posture and a display image of a head-mounted device according to an embodiment of the application;
  • FIG. 4C is a schematic diagram illustrating yet another user's head posture and a display image of a head-mounted device according to an embodiment of the application;
  • FIG. 5A is a schematic diagram illustrating the running posture of a vehicle and the posture of the user's head provided by an embodiment of this application;
  • FIG. 5B is a schematic diagram illustrating another vehicle running posture and the user's head posture according to an embodiment of this application.
  • FIG. 5C is a schematic diagram illustrating yet another vehicle running posture and the user's head posture according to an embodiment of this application.
  • FIG. 6A is a schematic diagram illustrating display images of some head-mounted devices according to an embodiment of the application;
  • FIG. 6B is a schematic diagram illustrating display images of still other head-mounted devices according to an embodiment of the application;
  • FIG. 7A is a schematic diagram of a user interface of a head-mounted device provided by an embodiment of this application.
  • FIG. 7B is a schematic structural diagram of a handle provided by an embodiment of the application.
  • FIG. 7C is a schematic diagram of a user interface of another head-mounted device provided by an embodiment of the application.
  • FIG. 7D is a schematic diagram of a user interface of another head-mounted device provided by an embodiment of the application.
  • FIG. 8A is a schematic flowchart of an image rendering method provided by an embodiment of this application.
  • FIG. 8B is a schematic diagram of an image displayed using the method provided by an embodiment of the present application.
  • FIG. 8C is a schematic diagram of an image displayed in the prior art.
  • FIG. 8D is another schematic diagram of an image displayed by using the method provided by an embodiment of the present application.
  • FIG. 8E is a schematic diagram of another image displayed by using the prior art.
  • FIG. 9 is a schematic flowchart of another image rendering method provided by an embodiment of the application.
  • FIG. 10 is a schematic structural diagram of a chip system provided by an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of another chip system provided by an embodiment of the application.
  • In the embodiments of the present application, words such as "exemplary" or "for example" are used to present examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be construed as more preferable or advantageous than other embodiments or designs. Rather, words such as "exemplary" or "for example" are used to present related concepts in a concrete manner.
  • In this application, "A/B" can mean A or B; "and/or" describes only an association between associated objects, indicating that three relationships may exist. For example, "A and/or B" can mean: A exists alone, both A and B exist, or B exists alone.
  • The terms "first" and "second" are only used for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, unless otherwise specified, "plurality" means two or more.
  • the method provided in the embodiment of the present application can be applied to a head-mounted device.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of this application.
  • The method provided in the embodiments of the present application can be applied to a scenario where a user uses the head-mounted device 100 to watch a video while riding in a vehicle (for example, an airplane, a car, a ship, a high-speed rail train, a subway, or a bicycle).
  • Generally, the head-mounted device 100 is configured with motion sensors, such as an accelerometer, a gyroscope, and a magnetometer, for measuring the posture data of the head-mounted device 100. The head-mounted device 100 then processes the measured posture data of the head-mounted device 100 together with the three-dimensional scene data to obtain the image to be displayed by the head-mounted device 100.
  • The three-dimensional scene data may be, for example, virtual reality (VR) video, augmented reality (AR) video, mixed reality (MR) video, and the like.
  • Generally, the posture data of the head-mounted device 100 is treated as equivalent to the posture data of the user's head. Therefore, rendering the image displayed by the head-mounted device 100 according to the posture data of the head-mounted device 100 amounts to rendering the image according to the posture data of the user's head.
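  • The embodiments do not specify how the readings of the individual motion sensors are fused into posture data. Purely as an assumed illustration, a common approach is a complementary filter that blends the gyroscope's integrated angle with the accelerometer's gravity-based tilt estimate; the axis conventions and the filter coefficient below are assumptions.

```python
import math

ALPHA = 0.98  # complementary-filter coefficient (assumed value)

def update_pitch(pitch_deg: float, gyro_x_dps: float,
                 accel_y: float, accel_z: float, dt: float) -> float:
    """One filter step for the pitch angle, in degrees.

    gyro_x_dps: angular rate around the X axis in deg/s;
    accel_y/accel_z: accelerometer readings in m/s^2 (axes assumed).
    """
    gyro_pitch = pitch_deg + gyro_x_dps * dt                  # integrate the gyro
    accel_pitch = math.degrees(math.atan2(accel_y, accel_z))  # tilt from gravity
    # Trust the gyro short-term and the accelerometer long-term.
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch
```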
  • In other words, the user can change the posture of the head-mounted device 100, and thus the image it displays, by actively changing the posture of the head.
  • The following describes the posture data measured by the head-mounted device 100 and the process by which the head-mounted device 100 displays images according to the posture of the user's head.
  • FIG. 3 is a schematic diagram of a user wearing the head-mounted device 100.
  • As shown in FIG. 3, the coordinate system of the head-mounted device 100 can be set as follows: along the direction of gravity, vertically upward is the positive direction of the Y axis; on the horizontal plane, the direction from the user's left to the user's right is the positive direction of the X axis.
  • Note that although the positive directions of the three coordinate axes are described here, the set coordinate system does not change with the posture of the head-mounted device 100.
  • The three axes and their positive directions set here are only examples, and the embodiments of the present application do not limit the specific settings of the coordinate axes.
  • Euler angles can be used to characterize the change in the posture of the head-mounted device 100. Therefore, the posture data of the head-mounted device 100 may include Euler angles. Euler angles, also called attitude angles, include the pitch angle (pitch), the yaw angle (yaw), and the roll angle (roll).
  • The pitch angle is the angle at which the head-mounted device 100 rotates clockwise around the X axis; the yaw angle is the angle at which the head-mounted device 100 rotates clockwise around the Y axis; and the roll angle is the angle at which the head-mounted device 100 rotates clockwise around the Z axis.
  • Combined with the actual scenario of using the head-mounted device 100, the user usually changes the image displayed by the head-mounted device 100 by turning the head left and right in the horizontal plane, or by moving the head up and down. Therefore, with reference to FIGS. 3 to 4C, the image change of the head-mounted device 100 is described by taking the user's left-right head rotation in the horizontal direction as an example. Since the user rotates the head left and right in the horizontal direction, the posture change of the head-mounted device 100, or equivalently of the user's head, can be represented by the yaw angle.
  • The image 301 shown in FIG. 4A is the image corresponding to the three-dimensional scene data. It should be noted that the image corresponding to the three-dimensional scene data is larger than the image displayed by the head-mounted device 100 at any one time.
  • When a user wears the head-mounted device 100 and views the image with the head facing forward, the yaw angle measured by the head-mounted device 100 is zero, and the image displayed by the head-mounted device 100 is the image within the marker frame 302 shown in FIG. 4A.
  • If the user's head rotates to the right by θ (for example, 30 degrees) in the horizontal direction, the yaw angle measured by the head-mounted device 100 is θ; the head-mounted device 100 then renders according to the original three-dimensional scene data and θ, and obtains and displays the image in the marker frame 303 shown in FIG. 4B. If the user's head rotates to the left by θ (for example, 30 degrees) in the horizontal direction, the yaw angle measured by the head-mounted device 100 is -θ (for example, -30 degrees); the head-mounted device 100 then processes according to the original three-dimensional scene data and -θ, and obtains and displays the image in the marker frame 304 shown in FIG. 4C.
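  • The relationship between the measured yaw angle and the displayed marker frame can be made concrete with a small worked sketch. It assumes, purely for illustration, that image 301 is a 360-degree panorama with a fixed horizontal field of view; neither assumption comes from the embodiments.

```python
def crop_left_edge(yaw_deg: float, panorama_width_px: int, fov_deg: float = 90.0) -> float:
    """Left edge (in pixels) of the displayed crop inside a 360-degree panorama.

    Yaw 0 centers the crop (marker frame 302); positive yaw (head turned
    right) moves the crop to the right (marker frame 303), and negative yaw
    moves it to the left (marker frame 304). Format and FOV are assumptions.
    """
    center = panorama_width_px / 2 + (yaw_deg / 360.0) * panorama_width_px
    crop_width = (fov_deg / 360.0) * panorama_width_px
    return center - crop_width / 2

# A 30-degree head turn in a 3600-px-wide panorama shifts the crop by 300 px:
print(crop_left_edge(30, 3600) - crop_left_edge(0, 3600))  # 300.0
```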
  • When the user rides in a vehicle, the running state of the vehicle affects the posture of the user's head. That is, when the user is in a running vehicle, the actual change in head posture includes the posture change caused by the user actively rotating the head and the passive change of the user's head posture caused by the change in the vehicle's posture.
  • the actual posture change of the head is the posture change of the head-mounted device 100.
  • The posture change caused by the user's active head rotation is also called the active change of the user's head posture. By actively rotating the head, the user expresses the intention to change the image displayed by the head-mounted device 100. That is, the active change of the user's head reflects the user's intention to change the image displayed by the head-mounted device 100.
  • Therefore, an image rendered only from the posture data measured by the head-mounted device 100 and the three-dimensional scene data will give the user a sense of offset, which is inconsistent with the user's real intention.
  • FIG. 5A is a schematic diagram of a posture of a vehicle and a posture of a user's head, where 51 denotes the vehicle and 52 denotes the user's head.
  • As shown in FIG. 5A, the yaw angle of the vehicle is zero, that is, the yaw angle of the passive change of the user's head posture is zero. The user also does not actively turn the head, that is, the yaw angle of the active change of the user's head posture is zero. Hence the yaw angle measured by the head-mounted device 100 is also zero, and the head-mounted device 100 displays the image in the marker frame 302 shown in FIG. 4A.
  • As shown in FIG. 5B, the vehicle changes direction with a yaw angle of θ, that is, the yaw angle of the passive change of the user's head posture is θ. The yaw angle measured by the head-mounted device 100 is then also θ. If the prior art is adopted, the image rendered according to the yaw angle measured by the head-mounted device 100 and the three-dimensional scene data is, for example, the image in the marker frame 303 in FIG. 4B.
  • However, the user does not actively rotate the head, that is, the user does not want to change the image of the head-mounted device 100; what the user wants to see is still the image in the marker frame 302 shown in FIG. 4A.
  • This gives the user the feeling that the image displayed by the head-mounted device 100 is offset, that is, the image that the user wants to see now lies to the left side of the display screen of the head-mounted device 100.
  • As shown in FIG. 5C, the yaw angle of the user's active head rotation is β; that is, what the user wants to see is the image corresponding to the yaw angle β in the image 301, namely the image in the marker frame 602 shown in FIG. 6B.
  • Therefore, the embodiment of the present application proposes to offset the posture data of the vehicle from the posture data measured by the head-mounted device 100 to obtain the posture data of the active change of the user's head, and then to render the image that the user wants to see according to the actively changed posture data of the user's head and the original three-dimensional scene data.
  • To obtain the posture data of the vehicle, an electronic device 200 in a relatively static state in the vehicle can be used, that is, a device whose posture does not change when the user actively changes the head posture.
  • The measured posture data of the electronic device 200 is then equivalent to the running posture of the vehicle, and also equivalent to the posture data of the passive change of the user's head posture; this is the first posture data.
  • The term "offset" used here, as in offsetting the posture data of the electronic device 200 from the posture data of the head-mounted device 100, can also be expressed by words such as "subtract", "compensate", "remove", "filter out", and "deduct".
  • The electronic device 200 and the head-mounted device 100 communicate with each other through a wired connection or a wireless connection (path 12 shown in FIG. 1).
  • The path 12 may adopt, for example, Bluetooth (BT), wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Zigbee, frequency modulation (FM), near field communication (NFC), infrared (IR), or general 2.4G/5G wireless communication technology.
  • the head-mounted device 100 may be worn on the user's head in the manner of a helmet, or may be worn on the user's eyes in the manner of glasses.
  • The specific form of the head-mounted device 100 is not limited in the embodiments of the present application.
  • Fig. 2 shows a schematic structural diagram of the head-mounted device 100.
  • The head-mounted device 100 may include a processor 110, an external memory interface 120, an internal memory 150, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, a wireless communication module 160, an audio module 170, a speaker 170A, a microphone 170C, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and so on.
  • the sensor module 180 may include a gyroscope sensor 180A, a magnetic sensor 180B, and an acceleration sensor 180C.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the head-mounted device 100.
  • the head-mounted device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction operation code and the timing signal, and complete the control of fetching and executing instructions.
  • The processor 110 may also be provided with a memory for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the internal memory 150 may be used to store computer executable program code, the executable program code including instructions.
  • the internal memory 150 may include a program storage area and a data storage area.
  • The program storage area can store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the head-mounted device 100.
  • the internal memory 150 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the head-mounted device 100 by running instructions stored in the internal memory 150 and/or instructions stored in a memory provided in the processor.
  • In some embodiments, the head-mounted device 100 may adopt an integrated design, that is, the processor 110 and the internal memory 150 included in the head-mounted device 100 are used to implement the related data processing. For example, according to the posture data of the head-mounted device 100 (that is, the second posture data) acquired by the sensor module 180 and the received posture data of the electronic device 200 (that is, the first posture data), the posture data of the active change of the user's head (that is, the third posture data) is calculated; the image to be displayed is then rendered based on the third posture data and the stored three-dimensional scene data and displayed on the display screen 194 of the head-mounted device 100.
  • For example, the internal memory 150 of the head-mounted device 100 stores the three-dimensional scene data of related applications. For example, the three-dimensional scene data includes three-dimensional VR video data, or three-dimensional game data (character data, scene data, basic terrain data, etc.).
  • the three-dimensional scene data of the related application may also be stored in the external memory through the external memory interface 120.
  • In some other embodiments, the head-mounted device 100 may adopt a split design, that is, the head-mounted device 100 is used only to acquire its own second posture data and to display the rendered image or images, while all or part of the related data processing is handed over to another device (such as the electronic device 200).
  • For example, the head-mounted device 100 may send the acquired second posture data to the electronic device 200, and the electronic device 200 calculates the third posture data from the second posture data and the first posture data it acquires itself, that is, the posture data of the electronic device 200. Then, the electronic device 200 renders the image to be displayed according to the third posture data and the stored three-dimensional scene data.
  • The electronic device 200 then sends the image to be displayed to the head-mounted device 100, and the head-mounted device 100 displays the image.
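  • The per-frame exchange in this split design can be pictured as a simple loop. The sketch below shows the data flow only; the transport, the object model, and every name in it are assumptions, and third_posture refers to the earlier illustrative sketch.

```python
def split_rendering_frame(headset, phone, scene_data):
    """One frame of the split design (all names are illustrative assumptions).

    The headset only measures its own posture and displays the result;
    the phone measures the vehicle posture, computes the third posture
    data, and renders.
    """
    second = headset.read_posture()          # second posture data
    phone.receive_posture(second)            # e.g. over Bluetooth or Wi-Fi
    first = phone.read_posture()             # first posture data (vehicle)
    third = third_posture(first, second)     # offset logic sketched earlier
    image = phone.render(scene_data, third)  # render the first image
    headset.display(image)                   # headset only displays
```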
  • the head-mounted device 100 may not include the processor 110 and/or the internal memory 150.
  • For another example, the head-mounted device 100 may send the acquired second posture data to another device A, which is not the electronic device 200. The electronic device 200 also sends the acquired first posture data to device A, and device A calculates the third posture data according to the received first posture data and second posture data. Then, device A renders the image to be displayed according to the third posture data and the stored three-dimensional scene data, and sends the image to the head-mounted device 100 for display.
  • In these cases, the three-dimensional scene data of the related application is stored on the device that performs the image rendering, that is, the electronic device 200 or device A.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the head-mounted device 100, and can also be used to transfer data between the head-mounted device 100 and a peripheral device (for example, the electronic device 200). It can also be used to connect earphones and play audio through earphones.
  • the USB interface 130 may be specifically used to connect the electronic device 200 or the device A for sending the second posture data and receiving the rendered image to be displayed.
  • the USB interface 130 may also be specifically used to connect a handle for receiving user operations.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely a schematic illustration, and does not constitute a structural limitation of the head-mounted device 100.
  • the head-mounted device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • In some embodiments of wireless charging, the charging management module 140 may receive wireless charging input through the wireless charging coil of the head-mounted device 100. While charging the battery 142, the charging management module 140 can also supply power to the device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 150, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication module 160 can provide wireless communication solutions applied to the head-mounted device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via an antenna, modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive the signal to be sent from the processor 110, perform frequency modulation on and amplify it, and convert it into electromagnetic waves for radiation through the antenna.
  • the head-mounted device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the head-mounted device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the head-mounted device 100 includes two display screens, one of which is located in front of the left eye for viewing by the left eye.
  • the other display is located in front of the right eye for viewing by the right eye.
  • the images displayed on the two display screens have a certain viewing angle difference, consistent with the viewing angle difference of the human eyes, so that the user can see a three-dimensional effect; one way to derive the two views is sketched below.
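  • a minimal NumPy sketch (the function name, pinhole conventions, and the 64 mm interpupillary distance are assumptions, not taken from the patent):

```python
import numpy as np

IPD = 0.064  # assumed interpupillary distance in metres

def eye_view_matrices(head_rotation, head_position, ipd=IPD):
    """Offset the head pose by half the interpupillary distance to
    the left and right, yielding one 4x4 view matrix per display."""
    right_axis = head_rotation[:, 0]  # head-local x axis in world frame
    views = []
    for sign in (-1.0, +1.0):  # -1: left eye, +1: right eye
        eye_pos = head_position + sign * 0.5 * ipd * right_axis
        view = np.eye(4)
        view[:3, :3] = head_rotation.T            # inverse rotation
        view[:3, 3] = -head_rotation.T @ eye_pos  # inverse translation
        views.append(view)
    return views  # [left_view, right_view]
```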
  • the method provided in the embodiments of the present application can be used to correct the acquired posture data of the head-mounted device 100, that is, to offset the passively changed posture data imposed on the user's head; the image is then rendered based on the corrected posture data and displayed on the corresponding display screen.
  • the head-mounted device 100 may implement a shooting function through an ISP, the camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • the ISP can also optimize the noise, brightness, and skin color of the image, as well as the exposure, color temperature, and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens, which is projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the head-mounted device 100 may include 1 or N cameras 193, and N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the head-mounted device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • the video codec is used to compress or decompress digital video.
  • the head-mounted device 100 may support one or more of the video codecs. In this way, the head-mounted device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • the NPU can quickly process input information and can also continuously self-learn.
  • applications such as intelligent cognition of the head-mounted device 100 can be realized, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the head-mounted device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize a data storage function. For example, save music, video and other files in an external memory card.
  • the head-mounted device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and an application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the head-mounted device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the microphone 170C, also called a "mike" or a "mic", is used to convert sound signals into electrical signals.
  • the user can make a sound near the microphone 170C with the mouth, thereby inputting a sound signal into the microphone 170C.
  • the head-mounted device 100 may be provided with at least one microphone 170C. In other embodiments, the head-mounted device 100 may be provided with two microphones 170C, which, in addition to collecting sound signals, can also implement a noise reduction function. In still other embodiments, the head-mounted device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, realize directional recording functions, and so on.
  • the button 190 includes a power-on button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the head-mounted device 100 may receive key input, and generate key signal input related to user settings and function control of the head-mounted device 100.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, and may also be used to indicate messages, missed calls, notifications, and so on.
  • the electronic device 200 of the embodiment of the present application may be a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, etc.
  • This application does not impose special restrictions on the specific form of the electronic device.
  • the electronic device 200 may include a processor, an external memory interface, an internal memory, a USB interface 130, a charging management module, a power management module, a battery, a mobile communication module, a wireless communication module, an audio module, a speaker, a receiver, a microphone, a headphone interface, a sensor module, buttons, motors, indicators, cameras, display screens, a subscriber identification module (SIM) card interface, etc.
  • the sensor module may include a motion sensor for measuring the posture data of the electronic device 200, that is, the first posture data.
  • motion sensors include, for example, gyroscope sensors, air pressure sensors, and magnetic sensors.
  • when the user is riding in a vehicle, the electronic device 200 is placed in a relatively stable position, for example, on the desktop of the vehicle, in a pocket behind the seat, or on another bracket capable of fixing the electronic device 200.
  • the first posture data acquired by the electronic device 200 is equivalent to the amount of posture change caused to the head-mounted device 100 when the vehicle is running.
  • the electronic device 200 may also include other sensors, such as an acceleration sensor, a distance sensor, a pressure sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and so on.
  • the mobile communication module can provide wireless communication solutions including 2G/3G/4G/5G and the like applied to the electronic device 200.
  • the mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module can receive electromagnetic waves by the antenna, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves to radiate through the antenna.
  • at least part of the functional modules of the mobile communication module may be provided in the processor.
  • at least part of the functional modules of the mobile communication module and at least part of the modules of the processor may be provided in the same device.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 200.
  • the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the head-mounted device 100 adopts a split design.
  • the head-mounted device 100 (such as VR glasses, VR helmet, etc.) needs to be used in conjunction with an electronic device 200 (such as a mobile phone, a computer, etc.).
  • the head-mounted device 100 obtains its own posture data, that is, the second posture data, and sends the second posture data to the electronic device 200.
  • the electronic device 200 obtains its own posture data, that is, the first posture data, and is responsible for data processing.
  • the third posture data is calculated according to the second posture data and the first posture data, and the image to be displayed by the head mounted device 100 is rendered according to the third posture data and the three-dimensional scene data.
  • the following uses VR glasses as the head-mounted device 100 and a mobile phone as the electronic device 200 as an example for description.
  • the VR glasses establish a communication connection with the mobile phone, such as a wired connection through a data cable, or a wireless connection through Wi-Fi.
  • the VR glasses may display a user interface 700 as shown in FIG. 7A on the display screen.
  • the user can see the corresponding interface from the display screen of the VR glasses by wearing the VR glasses.
  • the user interface 700 may include a status bar 701, one or more recommended application icons 702, and one or more quick start application icons 703.
  • the status bar 701 can display information such as the battery level of the mobile phone, time, handle connection or disconnection status, and network status.
  • the recommended application icons 702 are icons corresponding to the latest and most popular recommended VR applications, for example: the "Bomb Disposal Squad" game application icon, the "Mercenary" game application icon, the "Beautiful Planet" movie icon, etc. Users can also view more recommended application icons by scrolling left and right.
  • the quick-start application icon 703 is an icon of a resident application that is default or set by the user, and the user can quickly start the corresponding application through the icon, such as application market, VR mobile phone screen projection, content library, settings, etc.
  • the user interface 700 may also include icons of recently used applications and the like; the embodiment of the present application does not specifically limit the content of the user interface 700. In one example, after the communication connection between the VR glasses and the mobile phone is established, the mobile phone can turn off its screen.
  • a cursor may also be displayed on the interface displayed by the VR glasses, and the user can use the mobile phone, or a handle connected with the mobile phone, to control the display interface of the VR glasses in combination with the position of the cursor, similar to the way a user controls a computer display interface with a mouse.
  • FIG. 7B is a schematic structural diagram of a handle 300 provided by an embodiment of this application.
  • the handle 300 includes a touchpad 301, a return button, an OK button, and a volume button. Among them, the user can implement operations such as moving the cursor, clicking the cursor, double-clicking the cursor, and dragging the cursor through the touchpad 301.
  • the cursor may not be displayed in the interface displayed by the VR glasses; the user can then use the mobile phone or the handle connected to the mobile phone to perform up, down, left, and right operations to switch the selected icon or control on the VR glasses display interface, similar to the way a remote control selects options in a TV menu.
  • the embodiment of the present application does not limit the control method of the VR glasses.
  • the user can use the handle, move the cursor to the position of the "settings” application icon, and click the touchpad 301 or press the OK button to select the "settings” application icon.
  • the "Settings” application icon can prompt the user that the application icon has been selected by changing its own color or thickening the border or using animations.
  • the VR glasses display a setting interface 704 as shown in FIG. 7C.
  • the setting interface 704 may include functional controls such as travel mode, screen size, and theater lights.
  • the user can select the travel mode control and choose to turn on the travel mode, for example, an interface 705 as shown in FIG. 7D is displayed.
  • a prompt message 706 may be displayed in the interface 705 to prompt the user to keep the mobile phone in a stable state.
  • the mobile phone may be placed on a small table or in a pocket behind a front seat. It can be understood that when the mobile phone's travel mode is turned on, the posture data of the VR glasses needs to be corrected according to the posture data of the mobile phone; the three-dimensional scene data is then rendered according to the corrected posture data to obtain the image displayed by the VR glasses, so as to avoid the problem of image offset mentioned in the background art.
  • the travel mode control may also be displayed on other interfaces, for example, displayed on the user interface 700, so that the user can quickly turn on the travel mode.
  • the VR glasses can also prompt the user by playing a voice prompt; alternatively, the mobile phone may display the corresponding prompt information or play the voice prompt; or, when it is detected that the mobile phone is in an unstable state, the user is prompted to keep the mobile phone in a stable state.
  • the embodiment of the present application does not limit the method of prompting and the timing of prompting.
  • the travel mode can also be automatically turned on.
  • for example, the travel mode may be turned on when the moving speed derived from the acceleration sensor of the VR glasses or mobile phone is greater than a preset value (for example, 15 kilometers/hour), or when the mobile phone obtains the user's position and determines that the moving speed is greater than the preset value.
  • the mobile phone can automatically turn on the travel mode according to the user's travel information.
  • the travel information may be train ticket information, plane ticket information, schedule information, and so on; the embodiments of this application do not specifically limit the conditions for automatically turning on the travel mode. One possible decision rule is sketched below.
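  • a hypothetical sketch of such a decision rule (the helper name and parameters are assumptions; only the 15 km/h preset comes from the example above):

```python
SPEED_THRESHOLD_KMH = 15.0  # preset value given as an example in the text

def should_enable_travel_mode(moving_speed_kmh=None,
                              has_travel_ticket=False,
                              schedule_indicates_trip=False):
    """Hypothetical helper: enable travel mode when the measured moving
    speed exceeds the preset value, or when travel information
    (train/plane ticket, schedule entry) indicates a trip."""
    if moving_speed_kmh is not None and moving_speed_kmh > SPEED_THRESHOLD_KMH:
        return True
    return has_travel_ticket or schedule_indicates_trip
```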
  • after the travel mode is turned on, the mobile phone is placed in a position where it can maintain a stable state, that is, kept relatively static with respect to the vehicle; for example, it is placed on the desktop of the vehicle, in a pocket behind the seat, or on another bracket that can fix the electronic device 200.
  • the posture of the mobile phone acquired by the mobile phone can be considered equivalent to the running posture of the vehicle.
  • a schematic flow chart of an image rendering method provided by an embodiment of this application includes step S801 to step S806, which are specifically as follows:
  • S801: The mobile phone obtains the first posture data of the mobile phone, and the VR glasses obtain the second posture data of the VR glasses.
  • the first posture data may be used to characterize the current posture of the mobile phone.
  • the first posture data may include the Euler angle of the mobile phone.
  • the Euler angle may include a pitch angle (pitch), a yaw angle (yaw), and a roll angle (roll).
  • the first posture data may also be other data that can be used to characterize the current posture of the mobile phone, such as a quaternion, a rotation matrix, etc., which are not limited in the embodiments of the present application.
  • Euler angles, quaternions and rotation matrices can be converted mutually. The conversion methods between several parameters are given below.
  • assume the yaw angle in the Euler angles is yaw, the pitch angle is pitch, and the roll angle is roll, and let c1 = cos(yaw), s1 = sin(yaw), c2 = cos(pitch), s2 = sin(pitch), c3 = cos(roll), s3 = sin(roll).
  • using these quantities, the rotation matrix M can be constructed and then converted to a quaternion, as sketched below.
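  • a minimal Python sketch of the conversion chain, assuming the common Z-Y-X (yaw-pitch-roll) rotation order; the single-branch matrix-to-quaternion conversion is a simplification, not the patent's own formula:

```python
import numpy as np

def euler_to_matrix(yaw, pitch, roll):
    """Build a rotation matrix M from Euler angles (radians),
    assuming the Z-Y-X (yaw-pitch-roll) rotation order."""
    c1, s1 = np.cos(yaw), np.sin(yaw)
    c2, s2 = np.cos(pitch), np.sin(pitch)
    c3, s3 = np.cos(roll), np.sin(roll)
    Rz = np.array([[c1, -s1, 0], [s1, c1, 0], [0, 0, 1]])
    Ry = np.array([[c2, 0, s2], [0, 1, 0], [-s2, 0, c2]])
    Rx = np.array([[1, 0, 0], [0, c3, -s3], [0, s3, c3]])
    return Rz @ Ry @ Rx

def matrix_to_quaternion(M):
    """Convert rotation matrix M to a quaternion (w, x, y, z).
    For brevity this handles only the common trace > 0 branch."""
    w = 0.5 * np.sqrt(max(1.0 + M[0, 0] + M[1, 1] + M[2, 2], 1e-12))
    x = (M[2, 1] - M[1, 2]) / (4 * w)
    y = (M[0, 2] - M[2, 0]) / (4 * w)
    z = (M[1, 0] - M[0, 1]) / (4 * w)
    return np.array([w, x, y, z])
```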
  • the mobile phone is equipped with a motion sensor, which can be used to obtain the posture data of the mobile phone.
  • the motion sensor may include, for example, a gyroscope, an accelerometer, and a magnetometer.
  • the angular velocity of the mobile phone can be obtained through the gyroscope
  • the acceleration of the mobile phone can be obtained through the accelerometer
  • the geomagnetic field strength can be obtained through the magnetometer.
  • the Euler angle and/or quaternion of the mobile phone can be calculated from the angular velocity of the mobile phone, and the obtained Euler angle and/or quaternion can then be corrected using the acceleration and geomagnetic field strength of the mobile phone to obtain Euler angles and/or quaternions with higher accuracy; one such correction step is sketched below.
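  • a minimal complementary-filter sketch of this kind of correction (the axis conventions, blend factor, and function name are assumptions; a production system might instead use a Kalman filter):

```python
import numpy as np

def fuse_orientation(prev_angles, gyro, accel, mag, dt, alpha=0.98):
    """One complementary-filter step: integrate the gyroscope rates,
    then pull pitch/roll toward the accelerometer's gravity reference
    and yaw toward the magnetometer's heading reference.
    prev_angles and gyro are (yaw, pitch, roll) in radians and rad/s."""
    yaw, pitch, roll = np.asarray(prev_angles) + np.asarray(gyro) * dt

    ax, ay, az = accel                      # gravity direction
    pitch_ref = np.arctan2(-ax, np.hypot(ay, az))
    roll_ref = np.arctan2(ay, az)

    mx, my, _ = mag                         # heading reference (tilt
    yaw_ref = np.arctan2(-my, mx)           # compensation omitted)

    # Trust the gyro short-term, the absolute references long-term.
    return np.array([
        alpha * yaw + (1 - alpha) * yaw_ref,
        alpha * pitch + (1 - alpha) * pitch_ref,
        alpha * roll + (1 - alpha) * roll_ref,
    ])
```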
  • the first posture data may be unprocessed sensor data, such as angular velocity, acceleration, and geomagnetic field strength; it may also be the Euler angle and/or quaternion calculated from the acquired sensor data.
  • the first posture data may also include unprocessed sensor data, as well as calculated Euler angles and/or quaternions, etc., which are not limited in the embodiment of the present application.
  • the embodiment of the present application may also select corresponding posture data according to the actual scenario. It is understandable that when a user uses VR glasses, the user usually changes the screen displayed by the VR glasses by turning the head left and right or moving the head up and down; the posture data may then consider only the yaw angle and the pitch angle. For another example, if the VR glasses are set to change the displayed image only according to the user turning the head left and right, that is, moving the head up and down does not change the image displayed by the VR glasses, then the posture data may include only the yaw angle; the embodiment of the application does not limit this. In the following, description is given taking the first posture data being the Euler angle and/or quaternion obtained after calculation as an example.
  • the second posture data can be used to characterize the current posture of the head-mounted device.
  • the second posture data may include Euler angles of the head-mounted device.
  • the Euler angle includes a pitch angle (pitch), a yaw angle (yaw), and a roll angle (roll).
  • the second posture data may also be other data that can be used to characterize the current posture of the head-mounted device, such as a quaternion, a rotation matrix, etc., which is not limited in the embodiment of the present application. It should be noted that Euler angles, quaternions, and rotation matrices can be converted mutually.
  • VR glasses are also equipped with motion sensors, which can be used to obtain posture data of the VR glasses.
  • the motion sensor may include, for example, a gyroscope, an accelerometer, and a magnetometer.
  • the second posture data may be unprocessed sensor data of the VR glasses; after the mobile phone subsequently receives the sensor data, it can calculate the Euler angle and/or quaternion of the VR glasses.
  • the second posture data may also be Euler angles and/or quaternions calculated according to the acquired sensor data.
  • the second posture data may also include both unprocessed sensor data and calculated Euler angles and/or quaternions, etc., which are not specifically limited in the embodiment of the present application.
  • the Euler angle is the rotation angle of the VR glasses around the coordinate axes.
  • for other related descriptions of the second posture data, please refer to the related description of the posture data of the mobile phone, which will not be repeated here.
  • S802: The VR glasses send the acquired second posture data to the mobile phone.
  • the VR glasses can send the second posture data acquired by themselves to the mobile phone through a communication connection between the VR glasses and the mobile phone, such as a wired connection, or a wireless connection such as a WIFI connection.
  • S803: The mobile phone obtains the third posture data according to the first posture data and the second posture data.
  • the first posture data is equivalent to the running posture of the vehicle, and is also equivalent to the passively changed posture data of the user's head.
  • the second posture data is the posture data of the VR glasses, which includes both the actively changed posture data and the passively changed posture data of the user's head. Since the final image presented by the VR glasses should be consistent with the actively changed posture data of the user's head, the mobile phone needs to offset the first posture data from the second posture data to obtain the third posture data.
  • if the posture data are Euler angles, the first Euler angles in the first posture data can be subtracted from the second Euler angles in the second posture data to obtain the third Euler angles, that is, the third posture data.
  • for example, if the yaw angle in the first posture data measured by the mobile phone is θ and the yaw angle in the second posture data measured by the VR glasses is β, the yaw angle in the third posture data is β − θ.
  • taking the situation shown in FIG. 5C as an example, the yaw angle in the first posture data measured by the mobile phone is θ, and the yaw angle in the second posture data measured by the VR glasses is θ + α; subtracting the two gives a yaw angle of (θ + α) − θ = α in the third posture data, which is exactly the angle of the user's active head rotation.
  • if the posture data are quaternions, the quaternion in the second posture data can be multiplied by the inverse of the quaternion in the first posture data to obtain the quaternion of the third posture data.
  • if the posture data are rotation matrices, the third posture data is calculated according to the calculation principles of matrices.
  • for the specific calculation process, reference may be made to related calculation principles in the prior art, which will not be described one by one here.
  • the third posture data may include any one or any combination of Euler angles, quaternions, and rotation matrices, and the Euler angles, quaternions, and rotation matrices in the third posture data can be converted mutually. The offset computation is sketched below.
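  • a hedged Python sketch of both offset computations (the helper names are assumptions, not from the patent; for unit quaternions the conjugate serves as the inverse):

```python
import numpy as np

def third_posture_euler(second_euler, first_euler):
    """Euler-angle case: subtract the phone's (vehicle's) angles from
    the headset's angles, keeping only the active head motion."""
    return np.asarray(second_euler) - np.asarray(first_euler)

def quat_conjugate(q):
    """Conjugate of a (w, x, y, z) quaternion; equals the inverse
    for unit quaternions."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def third_posture_quat(q_second, q_first):
    """Quaternion case: remove the vehicle rotation by composing the
    headset pose with the inverse of the phone pose, i.e. q3 = q1^-1 * q2,
    so that q2 = q1 * q3."""
    return quat_multiply(quat_conjugate(q_first), q_second)
```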
  • S804: The mobile phone processes the three-dimensional scene data according to the third posture data to obtain the first image.
  • the three-dimensional scene data can be understood as a parameter set used to describe the model of the three-dimensional scene of the related application.
  • the three-dimensional scene data includes three-dimensional VR video data.
  • the three-dimensional scene data includes three-dimensional game data.
  • the three-dimensional game data includes character data, scene data, and basic terrain data.
  • the mobile phone projects the three-dimensional scene according to the three-dimensional scene data to obtain a two-dimensional image, that is, an original image, such as the image 301 shown in FIG. 4A.
  • Both the Euler angle and the rotation matrix in the third posture data can be understood as the angle of the user's active head rotation.
  • all pixels in the original image are rotated, and the image composed of new pixels is the first image.
  • the original image is processed with the third posture data.
  • the image displayed by the VR glasses obtained by the method described in this application is the image in the marked frame 801 in FIG. 8B.
  • the posture angle in the second posture data is ⁇
  • the rotation matrix is M2
  • the new pixel The corresponding color also changes, and the first image composed of new pixels is shifted by ⁇ degrees compared to the original image.
  • the angle of the user's active head rotation is 0, the two do not match, so the user feels the picture drifts.
  • the image displayed by the VR glasses obtained by using the prior art is the image in the marked frame 802 in FIG. 8C.
  • when the technology described in this application is used, the original image is processed with the third posture data. In the situation where the posture angle in the third posture data is α, the first image composed of the new pixels is offset by α degrees from the original image; because the user actively rotates the head by an angle of α, the two match, so the user will not feel the picture drifting in the VR glasses. The obtained image displayed by the VR glasses is the image in the marked frame 803 in FIG. 8D.
  • the posture angle in the second posture data is ⁇ + ⁇
  • the rotation matrix is M4
  • the color corresponding to the new pixel also changes.
  • the first image composed of the new pixel is offset by ⁇ + ⁇ degrees compared to the original image, but because the user actively rotates the head at an angle of ⁇ , the two do not match, so the user feels the picture drifts. .
  • the image displayed by the VR glasses obtained by using the prior art is the image in the marked frame 804 in FIG. 8E.
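  • a minimal sketch of rotating all pixels of the original image by the third-posture rotation, assuming a pinhole camera model with focal length f and principal point (cx, cy); nearest-neighbour sampling and brute-force loops are simplifications, not the patent's method:

```python
import numpy as np

def reproject(original, M3, f, cx, cy):
    """Rotate every pixel of the original image by the third-posture
    rotation matrix M3 under a pinhole model. For each output pixel,
    the ray is rotated back into the original view and sampled."""
    h, w = original.shape[:2]
    out = np.zeros_like(original)
    inv = M3.T  # the inverse of a rotation matrix is its transpose
    for v in range(h):
        for u in range(w):
            ray = inv @ np.array([u - cx, v - cy, f])
            if ray[2] <= 0:
                continue  # rotated ray points behind the camera
            us = int(round(ray[0] * f / ray[2] + cx))
            vs = int(round(ray[1] * f / ray[2] + cy))
            if 0 <= us < w and 0 <= vs < h:
                out[v, u] = original[vs, us]
    return out
```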
  • S805: The mobile phone sends the first image to the VR glasses.
  • S806: The VR glasses display the first image.
  • the first image displayed by the VR glasses at this time is obtained based only on the deflection of the VR glasses' posture caused by the user's active head rotation, and the influence of the running posture of the vehicle on the first image is filtered out.
  • the embodiment of the application obtains the posture data of the mobile phone kept in a stable state in the vehicle, which is equivalent to the running posture data of the vehicle, and offsets the running posture data of the vehicle from the posture data of the VR glasses, thereby obtaining the posture data of the user's active head movement.
  • the final image displayed by the VR glasses is rendered according to the posture data of the user's active head movement, which is consistent with the user's intention, avoids giving the user a feeling of image offset, and enhances the user's VR experience.
  • the user may accidentally pull the phone, or the vehicle may suddenly change direction by a large margin, which may cause a large deflection of the posture data of the phone, so that the picture displayed by the VR glasses also undergoes a large deflection, which affects the user experience.
  • therefore, it can be determined from the first posture data whether the mobile phone has deflected by a large amount. If a large deflection occurs, it is considered that the mobile phone may have been accidentally pulled or the vehicle has undergone a large change of direction, and the first posture data cannot be used; that is, the first posture data cannot be offset from the second posture data.
  • if no large deflection occurs, the first posture data can be used to adjust the second posture data to obtain the third posture data.
  • the first posture data may include the angular velocity of the mobile phone. Whether the mobile phone has deflected by a large amount can then be determined by judging whether the angular velocity of the mobile phone is greater than a threshold 1 (for example, 50 degrees/second). If the angular velocity of the mobile phone is greater than threshold 1, it is considered that the mobile phone has undergone a large deflection; otherwise, it is considered that the mobile phone has not deflected significantly.
  • in this way, before the first posture data is offset from the second posture data, it is first determined whether the angular velocity of the mobile phone is excessive, which can prevent the screen of the VR glasses from being greatly deflected when the mobile phone deflects greatly, further enhancing the user's VR experience.
  • after the mobile phone receives the second posture data of the VR glasses, it can also first determine whether the VR glasses are in a stable state, that is, whether the VR glasses have not deflected or have deflected only negligibly. If the VR glasses are in a stable state, the mobile phone is basically in a stable state as well; the value of the first posture data is then zero or small and its impact on the second posture data is not large, so the first posture data need not be offset from the second posture data. If the VR glasses are in an unstable state, the mobile phone is basically in an unstable state as well; the first posture data then has a larger value and a greater impact on the second posture data, so the first posture data needs to be offset from the second posture data.
  • for example, the second posture data includes acceleration and angular velocity. Whether the VR glasses are in a stable state can then be judged by determining whether the acceleration of the VR glasses is less than or equal to a threshold 2 (for example, 0.5 m/s²) and the angular velocity of the VR glasses is less than or equal to a threshold 3 (for example, 8 degrees/second). If both conditions hold, the VR glasses are considered to be in a stable state; if the acceleration of the VR glasses is greater than threshold 2, or the angular velocity of the VR glasses is greater than threshold 3, the VR glasses are considered to be in an unstable state.
  • in other words, it can first be determined whether the VR glasses are in a stable state. If the VR glasses are in an unstable state, the posture data of the user's active head rotation is calculated from the posture data of the VR glasses and the posture data of the mobile phone, and the first image is obtained by rendering with the three-dimensional scene data. If the VR glasses are in a stable state, the posture data of the VR glasses can be used directly to render the first image with the three-dimensional scene data. When the VR glasses are in a stable state, this reduces the calculation amount of the mobile phone and increases its processing rate. This decision logic is sketched below.
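  • a hedged sketch for the Euler-angle case; the data structure and field names are assumptions, while the three thresholds are the example values from the text:

```python
from dataclasses import dataclass
import numpy as np

ANGULAR_SPEED_MAX = 50.0  # threshold 1 from the text, degrees/second
ACCEL_STABLE_MAX = 0.5    # threshold 2 (unit assumed to be m/s^2)
GYRO_STABLE_MAX = 8.0     # threshold 3 from the text, degrees/second

@dataclass
class PostureSample:
    euler: np.ndarray   # (yaw, pitch, roll) in degrees
    gyro_norm: float    # magnitude of angular velocity, degrees/second
    accel_norm: float   # magnitude of linear acceleration

def display_pose(first: PostureSample, second: PostureSample) -> np.ndarray:
    """Decision logic sketched above: skip the offset when the glasses
    are stable or when the phone deflects too sharply."""
    # Glasses stable: the phone's contribution is negligible,
    # so use the glasses' pose directly (saves computation).
    if (second.accel_norm <= ACCEL_STABLE_MAX
            and second.gyro_norm <= GYRO_STABLE_MAX):
        return second.euler
    # Phone yanked or vehicle turning sharply: the first posture
    # data is unreliable, so do not offset it.
    if first.gyro_norm > ANGULAR_SPEED_MAX:
        return second.euler
    # Normal travel case: offset the vehicle motion from the headset.
    return second.euler - first.euler
```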
  • the head-mounted device 100 adopts an integrated design.
  • the head-mounted device 100 (such as VR glasses, VR helmet, etc.) needs to be used in conjunction with an electronic device 200 (such as a mobile phone, a computer, etc.).
  • the electronic device 200 is used to obtain its own posture data, that is, the first posture data, and send the first posture data to the head-mounted device 100.
  • the head-mounted device 100 obtains its own posture data, that is, the second posture data, and is responsible for data processing.
  • the third posture data is calculated according to the second posture data and the first posture data
  • the image to be displayed is rendered according to the third posture data and the three-dimensional scene data, and the image is displayed.
  • VR glasses are still used as the head-mounted device 100 and the mobile phone is used as the electronic device 200 as an example.
  • the mobile phone posture acquired by the mobile phone can be considered equivalent to the running posture of the vehicle.
  • a schematic flow chart of another image rendering method provided by an embodiment of this application is specifically as follows:
  • S901: The mobile phone obtains the first posture data, and the VR glasses obtain the second posture data.
  • S902: The mobile phone sends the first posture data to the VR glasses.
  • S903: The VR glasses obtain the third posture data according to the first posture data and the second posture data.
  • S904: The VR glasses process the three-dimensional scene data according to the obtained third posture data to obtain the first image.
  • the data of the three-dimensional scene is stored in the VR glasses.
  • the VR glasses process the calculated third posture data and the three-dimensional scene data to obtain the first image displayed by the VR glasses.
  • S905: The VR glasses display the first image.
  • the chip system includes at least one processor 1401 and at least one interface circuit 1402.
  • the processor 1401 and the interface circuit 1402 may be interconnected by wires.
  • the interface circuit 1402 can be used to receive signals from other devices (e.g., memory).
  • the interface circuit 1402 may be used to send signals to other devices (such as the processor 1401).
  • the interface circuit 1402 may read instructions stored in the memory and send the instructions to the processor 1401. When the instructions are executed by the processor 1401, the apparatus can be made to execute the steps executed by the electronic device 200 (for example, a mobile phone) in the foregoing embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in the embodiment of the present application.
  • the chip system includes at least one processor 1501 and at least one interface circuit 1502.
  • the processor 1501 and the interface circuit 1502 may be interconnected by wires.
  • the interface circuit 1502 may be used to receive signals from other devices (e.g., memory).
  • the interface circuit 1502 may be used to send signals to other devices (such as the processor 1501).
  • the interface circuit 1502 can read instructions stored in the memory and send the instructions to the processor 1501. When the instructions are executed by the processor 1501, the apparatus may be caused to execute the various steps executed by the head-mounted device 100 (for example, VR glasses) in the above-mentioned embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in the embodiment of the present application.
  • An embodiment of the present application also provides a device included in an electronic device, and the device has a function of realizing the behavior of the electronic device in any of the methods in the foregoing embodiments.
  • This function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes at least one module or unit corresponding to the above-mentioned functions.
  • for example, a sensor module or unit, a communication module or unit, and a processing module or unit.
  • the device can be a chip system.
  • An embodiment of the present application also provides a device included in a head-mounted device, and the device has the function of implementing the head-mounted device in any of the methods in the foregoing embodiments.
  • This function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes at least one module or unit corresponding to the above-mentioned functions. For example, a sensor module or unit, a communication module or unit, and a processing module or unit.
  • the device can be a chip system.
  • An embodiment of the present application also provides a computer-readable storage medium, including computer instructions, which when the computer instructions run on an electronic device, cause the electronic device to execute the method described in any one of the possible implementation manners in the foregoing embodiments.
  • the embodiments of the present application also provide a computer-readable storage medium, including computer instructions, which when the computer instructions run on the head-mounted device, cause the head-mounted device to execute as described in any of the possible implementation manners in the foregoing embodiments. The method described.
  • the embodiments of the present application also provide a computer program product, which when the computer program product runs on a computer, causes the computer to execute the method described in any one of the possible implementation manners in the foregoing embodiments.
  • the above-mentioned terminal and the like include hardware structures and/or software modules corresponding to each function.
  • the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer-software-driven hardware depends on the specific application and the design constraints of the technical solution. Those skilled in the art can use different methods for each specific application to implement the described functions, but such implementation should not be considered as going beyond the scope of the embodiments of the present invention.
  • the embodiment of the present application may divide the above-mentioned terminal and the like into functional modules according to the above-mentioned method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. It should be noted that the division of modules in the embodiment of the present invention is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • the functional units in the various embodiments of the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: flash memory, mobile hard disk, read-only memory, random access memory, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention relates to an image rendering method, an electronic device, and a system, which belong to the technical field of image processing and can reduce the offset of an image displayed by a head-mounted device caused by the travel posture of a means of transport, so as to improve the experience of using the head-mounted device. The method specifically comprises the following steps: a head-mounted device is connected to an electronic device, and the electronic device acquires posture data of the electronic device; the head-mounted device acquires posture data of the head-mounted device and sends the posture data to the electronic device; the electronic device computes, from the posture data of the electronic device and the posture data of the head-mounted device, the posture data of the user actively rotating the head, and processes the posture data of the user actively rotating the head together with three-dimensional scene data to obtain an image to be displayed; and the head-mounted device displays the image.
PCT/CN2020/127599 2020-01-20 2020-11-09 Procédé de rendu d'image, dispositif électronique, et système WO2021147465A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010066613.0A CN113223129B (zh) 2020-01-20 2020-01-20 一种图像渲染方法、电子设备及系统
CN202010066613.0 2020-01-20

Publications (1)

Publication Number Publication Date
WO2021147465A1 true WO2021147465A1 (fr) 2021-07-29

Family

ID=76992824

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127599 WO2021147465A1 (fr) 2020-01-20 2020-11-09 Procédé de rendu d'image, dispositif électronique, et système

Country Status (2)

Country Link
CN (1) CN113223129B (fr)
WO (1) WO2021147465A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114461072A (zh) * 2022-02-10 2022-05-10 湖北星纪时代科技有限公司 一种显示方法、装置、电子设备和存储介质
CN117008711A (zh) * 2022-04-29 2023-11-07 华为技术有限公司 确定头部姿态的方法以及装置
CN115988247B (zh) * 2022-12-08 2023-10-20 小象智能(深圳)有限公司 一种xr车载观影系统及方法
CN116204068B (zh) * 2023-05-06 2023-08-04 蔚来汽车科技(安徽)有限公司 增强现实显示设备及其显示方法、车辆、移动终端、介质

Citations (4)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106850967A (zh) * 2016-12-29 2017-06-13 深圳市宇恒互动科技开发有限公司 一种自适应屏显方法、系统及头戴设备
CN107820593A (zh) * 2017-07-28 2018-03-20 深圳市瑞立视多媒体科技有限公司 一种虚拟现实交互方法、装置及系统
US20190174016A1 (en) * 2017-12-06 2019-06-06 Fuji Xerox Co., Ltd. Display apparatus
CN109992111A (zh) * 2019-03-25 2019-07-09 联想(北京)有限公司 增强现实扩展方法和电子设备

Also Published As

Publication number Publication date
CN113223129B (zh) 2024-03-26
CN113223129A (zh) 2021-08-06

Similar Documents

Publication Publication Date Title
WO2021147465A1 (fr) Procédé de rendu d'image, dispositif électronique, et système
CN109917956B (zh) 一种控制屏幕显示的方法和电子设备
JP6844542B2 (ja) 情報処理装置、情報処理方法及びプログラム
US11288807B2 (en) Method, electronic device and storage medium for segmenting image
WO2020192458A1 (fr) Procédé d'affichage d'image et dispositif de visiocasque
WO2021017836A1 (fr) Procédé de commande d'affichage de dispositif à grand écran, terminal mobile et premier système
CN110427110B (zh) 一种直播方法、装置以及直播服务器
WO2021018070A1 (fr) Procédé d'affichage d'image et dispositif électronique
CN112533017B (zh) 直播方法、装置、终端及存储介质
CN112835445B (zh) 虚拟现实场景中的交互方法、装置及系统
WO2021164289A1 (fr) Procédé et appareil de traitement de portrait, et terminal
WO2021136266A1 (fr) Procédé de synchronisation d'image virtuelle et dispositif pouvant être porté
WO2021103990A1 (fr) Procédé d'affichage, dispositif électronique et système
WO2020237617A1 (fr) Procédé, dispositif et appareil de commande d'écran, et support de stockage
CN111479148B (zh) 可穿戴设备、眼镜终端、处理终端、数据交互方法与介质
EP3974950A1 (fr) Procédé et appareil interactifs dans une scène de réalité virtuelle
WO2022252924A1 (fr) Procédé de transmission et d'affichage d'image et dispositif et système associés
WO2022199102A1 (fr) Procédé et dispositif de traitement d'image
US20230409192A1 (en) Device Interaction Method, Electronic Device, and Interaction System
JP6969577B2 (ja) 情報処理装置、情報処理方法、及びプログラム
KR20180000009A (ko) 증강현실 생성 펜 및 이를 이용한 증강현실 제공 시스템
CN110971840B (zh) 视频贴图方法及装置、计算机设备及存储介质
WO2020044916A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
US11768546B1 (en) Method and device for positional/rotational information of a finger-wearable device
CN116820229B (zh) Xr空间的显示方法、xr设备、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915642

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915642

Country of ref document: EP

Kind code of ref document: A1