CN113223129B - Image rendering method, electronic equipment and system


Info

Publication number
CN113223129B
Authority
CN
China
Prior art keywords: data, gesture, head, gesture data, posture
Prior art date
Legal status: Active
Application number
CN202010066613.0A
Other languages: Chinese (zh)
Other versions: CN113223129A (en)
Inventors: 付钟奇, 沈钢, 姚建江, 朱应成, 单双, 赖武军
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010066613.0A
Priority to PCT/CN2020/127599 (WO2021147465A1)
Publication of CN113223129A
Application granted
Publication of CN113223129B


Classifications

    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G06T 2200/04 - Indexing scheme for image data processing or generation, in general, involving 3D image data
    • G06T 2210/61 - Scene description

Abstract

An image rendering method, an electronic device, and a system relate to the technical field of image processing and can reduce the offset of images displayed by a head-mounted device caused by the motion gesture of a vehicle, improving the experience of using the head-mounted device. The method specifically includes: the head-mounted device is connected to the electronic device; the electronic device obtains gesture data of the electronic device; the head-mounted device obtains gesture data of the head-mounted device and sends it to the electronic device; the electronic device calculates, from the gesture data of the electronic device and the gesture data of the head-mounted device, the gesture data of the head rotation actively made by the user; that gesture data and the three-dimensional scene data are processed to obtain an image to be displayed; and the head-mounted device displays the image.

Description

Image rendering method, electronic equipment and system
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image rendering method, an electronic device, and a system.
Background
Virtual Reality (VR) refers to a human-computer interaction technique built on computer and sensor technology. VR uses computer simulation to generate a three-dimensional virtual world and provides simulated visual, auditory, tactile, and other sensory feedback, so that the user can observe things in the three-dimensional space in a timely and unrestricted manner, as if personally in the scene. Head-mounted devices based on VR technology have been widely used in fields such as education, entertainment, medical treatment, and design. Moreover, as portable head-mounted devices mature, it has become common for users to enjoy VR experiences with a portable head-mounted device while traveling.
In general, a head-mounted device is provided with motion sensors, such as an accelerometer and a gyroscope, to collect its own gesture data. This data is equivalent to the gesture data of the user's head and is used to render the image displayed by the head-mounted device, simulating how, in reality, the image seen by the eyes changes as the user's head moves. However, in a travel scenario, for example when the user is riding a vehicle, the motion state of the vehicle itself affects the gesture data measured by the head-mounted device and causes the displayed image to shift. For example, when the user does not actively turn the head, that is, the user's head does not move relative to the vehicle, the user does not want the image displayed by the head-mounted device to change. However, if the vehicle changes direction during this time, the gesture of the head-mounted device itself changes, and so does the displayed image. The image displayed by the head-mounted device therefore changes against the user's real intention and is perceived by the user as having shifted, which reduces the immersion of using the head-mounted device and easily causes dizziness.
Disclosure of Invention
The image rendering method, the electronic device, and the system provided in this application can reduce the offset of the images displayed by a head-mounted device caused by the motion gesture of the vehicle, and improve the experience of using the head-mounted device.
In a first aspect, an image rendering method is provided, which is applied to a system including an electronic device and a head-mounted device, and the method includes: the electronic equipment acquires first gesture data of the electronic equipment; the head-mounted device acquires second gesture data of the head-mounted device and sends the second gesture data to the electronic device; the electronic device receives the second gesture data sent by the head-mounted device; the electronic equipment obtains third gesture data according to the first gesture data and the second gesture data, wherein the third gesture data is used for representing the head rotation gesture of a user; the electronic equipment processes the three-dimensional scene data according to the third gesture data to obtain a first image; the electronic device sends the first image to the head-mounted device; the head mounted device displays the first image.
In this way, the gesture data of a mobile phone that is stationary relative to the vehicle is obtained; this data is equivalent to the running gesture data of the vehicle. The running gesture data of the vehicle is canceled from the gesture data of the head-mounted device, yielding the gesture data of the head movement actively made by the user. The image finally displayed by the head-mounted device is rendered according to this gesture data of the user's active head movement, so the display stays consistent with the user's intention, the user is not given the sense that the image has shifted, and the user's VR experience is improved.
In a possible implementation, the first gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the electronic device, and the second gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the head-mounted device.
In a possible implementation manner, the electronic device obtains third gesture data according to the first gesture data and the second gesture data, including: if the electronic device judges that the first gesture data meets a first condition, the electronic device determines that the second gesture data is the third gesture data; and if the electronic device judges that the first gesture data does not meet the first condition, the electronic device cancels the first gesture data from the second gesture data to obtain the third gesture data.
The first condition is used to characterize whether the electronic device has deflected substantially. If it has, it is considered that the electronic device may have been accidentally moved or that the vehicle has changed direction sharply, and the first gesture data cannot be canceled from the second gesture data. Instead, the second gesture data and the stored three-dimensional scene data are used directly for rendering to obtain the first image, which prevents the image displayed by the VR glasses from being deflected sharply. If it is determined that the electronic device has not deflected substantially, the electronic device is considered to be in a stable state, and the first gesture data can be used to adjust the second gesture data to obtain the third gesture data.
In a possible implementation, the first gesture data further includes an angular velocity of the electronic device, and the first condition includes the angular velocity of the electronic device being greater than a first threshold.
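As a rough illustration of this first condition, the check can be reduced to a single angular-velocity test on the electronic device. The following is a minimal Python sketch; the function name and threshold value are assumptions chosen for illustration, since the patent does not specify them.

```python
# Hypothetical sketch of the "first condition" check described above.
# The threshold value and names are assumptions for illustration only.

FIRST_THRESHOLD_DEG_PER_S = 30.0  # assumed threshold for a "substantial deflection"

def first_condition_met(phone_angular_velocity_deg_per_s: float) -> bool:
    """True when the electronic device (first gesture data) is deflecting sharply,
    e.g. it was knocked over or the vehicle turned abruptly; in that case the
    first gesture data is not canceled from the second gesture data."""
    return phone_angular_velocity_deg_per_s > FIRST_THRESHOLD_DEG_PER_S
```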
In a possible implementation manner, the electronic device obtains third gesture data according to the first gesture data and the second gesture data, further including: the electronic device judges whether the second gesture data meets a second condition; if the second gesture data meets the second condition, the electronic device determines that the second gesture data is the third gesture data; and if the second gesture data does not meet the second condition, the electronic device cancels the first gesture data from the second gesture data to obtain the third gesture data.
The second condition is used to characterize whether the head-mounted device is in a steady state, that is, the head-mounted device has not deflected, or has deflected so little that the deflection can be ignored. If the head-mounted device is in a steady state, the electronic device is basically in a steady state as well; the value of the first gesture data is zero or small, its influence on the second gesture data is slight, and the first gesture data does not need to be canceled from the second gesture data. If the head-mounted device is in an unstable state, the electronic device is basically also in an unstable state; the value of the first gesture data of the electronic device is larger, its influence on the second gesture data is greater, and the first gesture data then needs to be canceled from the second gesture data.
In other words, when the head-mounted device is in a stable state, the workload of calculating the third gesture data can be reduced, and the processing rate of the electronic device is improved.
In a possible implementation, the second gesture data further includes an acceleration of the head mounted device and/or an angular velocity of the head mounted device, the second condition includes the acceleration of the head mounted device being less than or equal to a second threshold value, and/or the angular velocity of the head mounted device being less than or equal to a third threshold value.
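Analogously, the second condition can be pictured as a stability test on the head-mounted device itself; when it holds, the cancellation step can be skipped. The sketch below is again a minimal, assumed illustration: the threshold values are invented, and it requires both tests to pass although the claim allows either or both ("and/or").

```python
# Hypothetical sketch of the "second condition" (headset stability) check.
# Threshold values are illustrative assumptions, not values from the patent.

SECOND_THRESHOLD_ACCEL_M_S2 = 0.2   # assumed acceleration threshold
THIRD_THRESHOLD_DEG_PER_S = 1.0     # assumed angular-velocity threshold

def second_condition_met(headset_accel_m_s2: float,
                         headset_angular_velocity_deg_per_s: float) -> bool:
    """True when the head-mounted device is essentially static, so the second
    gesture data can be used directly as the third gesture data."""
    return (headset_accel_m_s2 <= SECOND_THRESHOLD_ACCEL_M_S2
            and headset_angular_velocity_deg_per_s <= THIRD_THRESHOLD_DEG_PER_S)
```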
In a possible implementation manner, the canceling, by the electronic device, of the first gesture data from the second gesture data to obtain the third gesture data includes: the electronic device subtracts the Euler angle of the electronic device from the Euler angle of the head-mounted device to obtain the third gesture data.
In this embodiment of the application, the gesture data to be used can also be selected according to the actual scenario. It can be understood that, when using the head-mounted device, the user typically changes the displayed content by turning the head left and right or by moving the head up and down. In that case, the gesture data may include only the yaw angle and the pitch angle. As another example, if the head-mounted device is configured to change the displayed image only when the user turns the head left and right, that is, moving the head up and down does not change the image displayed by the head-mounted device, the gesture data may include only the yaw angle. This helps simplify the calculation and improves the processing efficiency of the electronic device.
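Combining the two checks with the Euler-angle subtraction described above, one possible way to compute the third gesture data is sketched below. It assumes both devices report Euler angles in the same world-fixed reference frame and that a simple component-wise subtraction is sufficient; the data type and function names are illustrative, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class EulerAngles:
    """Euler angles in degrees: pitch (about X), yaw (about Y), roll (about Z)."""
    pitch: float = 0.0
    yaw: float = 0.0
    roll: float = 0.0

def compute_third_gesture(first: EulerAngles, second: EulerAngles,
                          first_condition: bool, second_condition: bool) -> EulerAngles:
    """first = electronic device (vehicle) gesture data, second = headset gesture data."""
    if first_condition or second_condition:
        # Phone deflecting sharply, or headset essentially static:
        # use the headset gesture data directly as the third gesture data.
        return second
    # Otherwise cancel the vehicle gesture from the headset gesture component-wise.
    return EulerAngles(second.pitch - first.pitch,
                       second.yaw - first.yaw,
                       second.roll - first.roll)
```

When only left-right head rotation matters, the same subtraction can be restricted to the yaw component alone.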
In a possible implementation manner, the electronic device processes three-dimensional scene data according to the third gesture data to obtain a first image, including: the electronic equipment obtains a second image according to the three-dimensional scene data; and the electronic equipment rotates the second image according to the rotation matrix in the third gesture data to obtain the first image.
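One way to read this rotation step is that the rotation carried in the third gesture data is applied to the viewing direction used to sample the already-rendered second image. The sketch below only builds a yaw rotation matrix and rotates a view direction with it; how the rotated direction is used to resample the second image is left open, and the use of NumPy is an assumption, not something stated in the patent.

```python
import numpy as np

def yaw_rotation_matrix(yaw_deg: float) -> np.ndarray:
    """Rotation about the vertical (Y) axis, as carried in the third gesture data."""
    a = np.radians(yaw_deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

def rotate_view_direction(view_dir: np.ndarray, yaw_deg: float) -> np.ndarray:
    """Rotate the camera viewing direction; re-sampling the second image along the
    rotated direction would yield the first image to be displayed."""
    return yaw_rotation_matrix(yaw_deg) @ view_dir
```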
A second aspect is an image rendering method, which is applied to an electronic device, and includes: the electronic equipment acquires first gesture data of the electronic equipment and receives second gesture data of the head-mounted equipment, which is sent by the head-mounted equipment; the electronic equipment obtains third gesture data according to the first gesture data and the second gesture data; the third gesture data is used for representing the head rotation gesture of the user; the electronic equipment processes the three-dimensional scene data according to the third gesture data to obtain a first image; the electronic device sends the first image to the head-mounted device.
In a possible implementation, the first gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the electronic device, and the second gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the head-mounted device.
In a possible implementation manner, the electronic device obtains third gesture data according to the first gesture data and the second gesture data, including: if the electronic device judges that the first gesture data meets a first condition, the electronic device determines that the second gesture data is the third gesture data; and if the electronic device judges that the first gesture data does not meet the first condition, the electronic device cancels the first gesture data from the second gesture data to obtain the third gesture data.
In a possible implementation, the first gesture data further includes an angular velocity of the electronic device, and the first condition includes the angular velocity of the electronic device being greater than a first threshold.
In a possible implementation manner, the electronic device obtains third gesture data according to the first gesture data and the second gesture data, further including: the electronic device judges whether the second gesture data meets a second condition; if the second gesture data meets the second condition, the electronic device determines that the second gesture data is the third gesture data; and if the second gesture data does not meet the second condition, the electronic device cancels the first gesture data from the second gesture data to obtain the third gesture data.
In a possible implementation, the second gesture data further includes an acceleration of the head mounted device and/or an angular velocity of the head mounted device, the second condition includes the acceleration of the head mounted device being less than or equal to a second threshold value, and/or the angular velocity of the head mounted device being less than or equal to a third threshold value.
In a possible implementation manner, the canceling, by the electronic device, of the first gesture data from the second gesture data to obtain the third gesture data includes: the electronic device subtracts the Euler angle of the electronic device from the Euler angle of the head-mounted device to obtain the third gesture data.
In a possible implementation manner, the electronic device processes three-dimensional scene data according to the third gesture data to obtain a first image, including: the electronic equipment obtains a second image according to the three-dimensional scene data; and the electronic equipment rotates the second image according to the rotation matrix in the third gesture data to obtain the first image.
In a third aspect, an image rendering method is provided, applied to a system including an electronic device and a head-mounted device, the method including: the electronic equipment acquires first gesture data of the electronic equipment, the first gesture data is sent to the head-mounted equipment, and the head-mounted equipment acquires second gesture data of the head-mounted equipment; the head-mounted device receives the first gesture data sent by the electronic device;
The head-mounted device obtains third gesture data according to the first gesture data and the second gesture data; the third gesture data is used for representing the head rotation gesture of the user; the head-mounted device processes the three-dimensional scene data according to the third gesture data to obtain a first image; the head mounted device displays the first image.
In a possible implementation, the first gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the electronic device, and the second gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the head-mounted device.
In a possible implementation manner, the head-mounted device obtains third gesture data according to the first gesture data and the second gesture data, including: if the head-mounted device judges that the first gesture data meets a first condition, the head-mounted device determines that the second gesture data is the third gesture data; and if the head-mounted device judges that the first gesture data does not meet the first condition, the head-mounted device cancels the first gesture data from the second gesture data to obtain the third gesture data.
In a possible implementation, the first gesture data further includes an angular velocity of the electronic device, and the first condition includes the angular velocity of the electronic device being greater than a first threshold.
In a possible implementation manner, the head-mounted device obtains third gesture data according to the first gesture data and the second gesture data, further including: the head-mounted device judges whether the second gesture data meets a second condition; if the second gesture data meets the second condition, the head-mounted device determines that the second gesture data is the third gesture data; and if the second gesture data does not meet the second condition, the head-mounted device cancels the first gesture data from the second gesture data to obtain the third gesture data.
In a possible implementation, the second gesture data further includes an acceleration of the head mounted device and/or an angular velocity of the head mounted device, the second condition includes the acceleration of the head mounted device being less than or equal to a second threshold value, and/or the angular velocity of the head mounted device being less than or equal to a third threshold value.
In a possible implementation manner, the canceling, by the head-mounted device, of the first gesture data from the second gesture data to obtain the third gesture data includes: the head-mounted device subtracts the Euler angle of the electronic device from the Euler angle of the head-mounted device to obtain the third gesture data.
In a possible implementation manner, the headset device processes three-dimensional scene data according to the third gesture data to obtain a first image, including: the head-mounted device obtains a second image according to the three-dimensional scene data; and the head-mounted device rotates the second image according to the rotation matrix in the third gesture data to obtain the first image.
A fourth aspect is an image rendering method, which is applied to a head-mounted device, and includes: the head-mounted device acquires second gesture data of the head-mounted device and receives first gesture data of the electronic device, sent by the electronic device; the head-mounted device obtains third gesture data according to the first gesture data and the second gesture data, where the third gesture data is used for representing the head rotation gesture of the user; the head-mounted device processes the three-dimensional scene data according to the third gesture data to obtain a first image; and the head-mounted device displays the first image.
In a possible implementation, the first gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the electronic device, and the second gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the head-mounted device.
In a possible implementation manner, the head-mounted device obtains third gesture data according to the first gesture data and the second gesture data, including: if the head-mounted device judges that the first gesture data meets a first condition, the head-mounted device determines that the second gesture data is the third gesture data; and if the head-mounted device judges that the first gesture data does not meet the first condition, the head-mounted device cancels the first gesture data from the second gesture data to obtain the third gesture data.
In a possible implementation, the first gesture data further includes an angular velocity of the electronic device, and the first condition includes the angular velocity of the electronic device being greater than a first threshold.
In a possible implementation manner, the head-mounted device obtains third gesture data according to the first gesture data and the second gesture data, further including: the head-mounted device judges whether the second gesture data meets a second condition; if the second gesture data meets the second condition, the head-mounted device determines that the second gesture data is the third gesture data; and if the second gesture data does not meet the second condition, the head-mounted device cancels the first gesture data from the second gesture data to obtain the third gesture data.
In a possible implementation, the second gesture data further includes an acceleration of the head mounted device and/or an angular velocity of the head mounted device, the second condition includes the acceleration of the head mounted device being less than or equal to a second threshold value, and/or the angular velocity of the head mounted device being less than or equal to a third threshold value.
In a possible implementation manner, the canceling, by the head-mounted device, of the first gesture data from the second gesture data to obtain the third gesture data includes: the head-mounted device subtracts the Euler angle of the electronic device from the Euler angle of the head-mounted device to obtain the third gesture data.
In a possible implementation manner, the headset device processes three-dimensional scene data according to the third gesture data to obtain a first image, including: the head-mounted device obtains a second image according to the three-dimensional scene data; and the head-mounted device rotates the second image according to the rotation matrix in the third gesture data to obtain the first image.
A fifth aspect is an electronic device, comprising: a processor, a memory, and a communication interface, the memory and the communication interface being coupled to the processor, the memory being configured to store computer program code, the computer program code comprising computer instructions that, when read from the memory by the processor, cause the electronic device to perform the following: acquiring first gesture data of the electronic device, and receiving second gesture data of the head-mounted device sent by the head-mounted device; obtaining third gesture data according to the first gesture data and the second gesture data, where the third gesture data is used for representing the head rotation gesture of the user; processing the three-dimensional scene data according to the third gesture data to obtain at least one first image; and sending the first image to the head-mounted device.
In a possible implementation, the first gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the electronic device, and the second gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the head-mounted device.
In a possible implementation manner, the obtaining third gesture data according to the first gesture data and the second gesture data includes: if the first gesture data is judged to meet the first condition, determining the second gesture data as the third gesture data; and if the first gesture data does not meet the first condition, canceling the first gesture data from the second gesture data to obtain the third gesture data.
In a possible implementation, the first gesture data further includes an angular velocity of the electronic device, and the first condition includes the angular velocity of the electronic device being greater than a first threshold.
In a possible implementation manner, the obtaining third gesture data according to the first gesture data and the second gesture data further includes: judging whether the second gesture data meets a second condition; if the second gesture data meets the second condition, determining that the second gesture data is the third gesture data; and if the second gesture data does not meet the second condition, canceling the first gesture data from the second gesture data to obtain the third gesture data.
In a possible implementation, the second gesture data further includes an acceleration of the head mounted device and/or an angular velocity of the head mounted device, the second condition includes the acceleration of the head mounted device being less than or equal to a second threshold value, and/or the angular velocity of the head mounted device being less than or equal to a third threshold value.
In a possible implementation manner, the canceling the first gesture data from the second gesture data to obtain the third gesture data includes: subtracting the Euler angle of the electronic device from the Euler angle of the head-mounted device to obtain the third gesture data.
In a possible implementation manner, the processing the three-dimensional scene data according to the third gesture data to obtain a first image includes: obtaining a second image according to the three-dimensional scene data; and rotating the second image according to the rotation matrix in the third gesture data to obtain the first image.
A sixth aspect is a head-mounted device, including: a processor, a memory, a communication interface, and a display screen, the memory, the communication interface, and the display screen being coupled to the processor, the memory being configured to store computer program code, the computer program code comprising computer instructions that, when read from the memory by the processor, cause the head-mounted device to perform the following: acquiring second gesture data of the head-mounted device, and receiving first gesture data of the electronic device sent by the electronic device; obtaining third gesture data according to the first gesture data and the second gesture data, where the third gesture data is used for representing the head rotation gesture of the user; processing the three-dimensional scene data according to the third gesture data to obtain a first image; and displaying the first image.
In a possible implementation, the first gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the electronic device, and the second gesture data includes at least one of an Euler angle, a quaternion, and a rotation matrix of the head-mounted device.
In a possible implementation manner, the head-mounted device obtains third gesture data according to the first gesture data and the second gesture data, including: if the head-mounted device judges that the first gesture data meets a first condition, the head-mounted device determines that the second gesture data is the third gesture data; and if the head-mounted device judges that the first gesture data does not meet the first condition, the head-mounted device cancels the first gesture data from the second gesture data to obtain the third gesture data.
In a possible implementation, the first gesture data further includes an angular velocity of the electronic device, and the first condition includes the angular velocity of the electronic device being greater than a first threshold.
In a possible implementation manner, the head-mounted device obtains third gesture data according to the first gesture data and the second gesture data, further including: the head-mounted device judges whether the second gesture data meets a second condition; if the second gesture data meets the second condition, the head-mounted device determines that the second gesture data is the third gesture data; and if the second gesture data does not meet the second condition, the head-mounted device cancels the first gesture data from the second gesture data to obtain the third gesture data.
In a possible implementation, the second gesture data further includes an acceleration of the head mounted device and/or an angular velocity of the head mounted device, the second condition includes the acceleration of the head mounted device being less than or equal to a second threshold value, and/or the angular velocity of the head mounted device being less than or equal to a third threshold value.
In a possible implementation manner, the canceling, by the head-mounted device, of the first gesture data from the second gesture data to obtain the third gesture data includes: the head-mounted device subtracts the Euler angle of the electronic device from the Euler angle of the head-mounted device to obtain the third gesture data.
In a possible implementation manner, the headset device processes three-dimensional scene data according to the third gesture data to obtain a first image, including: the head-mounted device obtains a second image according to the three-dimensional scene data; and the head-mounted device rotates the second image according to the rotation matrix in the third gesture data to obtain the first image.
A seventh aspect provides an apparatus, the apparatus being included in an electronic device, and the apparatus having functionality to implement the behaviour of the electronic device in the second aspect and any possible implementation manner of the second aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the functions described above, for example, a sensor module or unit, a communication module or unit, and a processing module or unit.
In one possible implementation, the apparatus may be a system-on-chip.
An eighth aspect provides an apparatus, the apparatus being included in a head-mounted device, and the apparatus having functionality to implement the behaviour of the head-mounted device in the fourth aspect and any possible implementation manner of the fourth aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the functions described above, for example, a sensor module or unit, a communication module or unit, and a processing module or unit.
In one possible implementation, the apparatus may be a system-on-chip.
A ninth aspect provides a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform a method as described in any one of the possible implementations of the second aspect and the second aspect above.
A tenth aspect provides a computer readable storage medium comprising computer instructions which, when run on a head mounted device, cause the head mounted device to perform the method as described in any one of the possible implementations of the fourth and fourth aspects above.
An eleventh aspect provides a computer program product which, when run on a computer, causes the computer to perform the method as described in the second aspect and any of the possible implementations of the second aspect or to perform the method as described in the fourth aspect and any of the possible implementations of the fourth aspect.
A twelfth aspect provides a system on a chip comprising a processor which, when executing instructions, performs the method as described in the second aspect and any one of the possible implementations of the second aspect, or performs the method as described in the fourth aspect and any one of the possible implementations of the fourth aspect.
Drawings
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
FIG. 3 is an illustrative schematic diagram of some of the gesture data provided by embodiments of the present application;
FIG. 4A is an explanatory diagram of a user's head pose and a displayed image of a head-mounted device according to an embodiment of the present application;
FIG. 4B is an illustrative diagram of another user head pose and headset display image provided in an embodiment of the present application;
FIG. 4C is an illustrative diagram of another user head pose and headset display image provided by embodiments of the present application;
FIG. 5A is a schematic illustration of a vehicle operating pose and a user head pose according to an embodiment of the present application;
FIG. 5B is a schematic illustration of another vehicle operating pose and user head pose according to an embodiment of the present application;
FIG. 5C is a schematic illustration of another vehicle operating pose and user head pose according to an embodiment of the present application;
FIG. 6A is an illustrative diagram providing some headset display images according to embodiments of the present application;
FIG. 6B is an illustrative diagram providing still other headset display images in accordance with embodiments of the present application;
FIG. 7A is a schematic diagram of a user interface of a headset according to an embodiment of the present application;
FIG. 7B is a schematic view of a handle according to an embodiment of the present disclosure;
FIG. 7C is a schematic diagram of a user interface of another head-mounted device provided in an embodiment of the present application;
FIG. 7D is a schematic diagram of a user interface of a further headset provided in an embodiment of the present application;
fig. 8A is a flowchart of an image rendering method according to an embodiment of the present application;
FIG. 8B is a schematic view of an image displayed using the method provided by the embodiments of the present application;
FIG. 8C is a schematic diagram of an image displayed using the prior art;
FIG. 8D is a schematic view of another image displayed by the method according to the embodiments of the present application;
FIG. 8E is a schematic diagram of another prior art display;
fig. 9 is a flowchart of another image rendering method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a chip system according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of yet another chip system according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone.
The terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
The method provided by the embodiments of the present application can be applied to a head-mounted device. By way of example, fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application. The method can be applied to a scenario in which a user watches video with the head-mounted device 100 while riding a vehicle (such as an airplane, an automobile, a ship, a high-speed train, a subway, or a bicycle).
In some embodiments of the present application, the head-mounted device 100 is configured with motion sensors, such as an accelerometer, a gyroscope, and a magnetometer, for measuring gesture data of the head-mounted device 100. The head-mounted device 100 then processes the measured gesture data of the head-mounted device 100 and the three-dimensional scene data to obtain an image to be displayed by the head-mounted device 100. The three-dimensional scene data may be, for example, virtual reality (VR) video, augmented reality (AR) video, or mixed reality (MR) video.
When the user is in a non-moving state, for example, in a scene where the user is not riding a vehicle, since the user wears the head mounted device 100, the posture data of the head mounted device 100 corresponds to the posture data of the user's head. Then, the image displayed by the head-mounted device 100 is rendered according to the posture data of the head-mounted device 100, that is, the image displayed by the head-mounted device 100 is rendered according to the posture data of the head of the user. Accordingly, the user can change the posture of the head-mounted device 100 by actively changing the posture of the head to change the image displayed by the head-mounted device 100.
In order to facilitate understanding of the technical solutions provided in the embodiments of the present application, description will be made of gesture data measured by the head-mounted device 100 and a process in which the head-mounted device 100 displays an image according to a gesture of a head of a user.
Illustratively, fig. 3 shows a schematic view of the head-mounted device 100 as worn by a user. With reference to the posture of the head-mounted device 100 when the user wears it, the coordinate system of the head-mounted device 100 may be set as follows: in the direction of gravity, vertically upward is the positive direction of the Y axis; the direction from the user's left side to the user's right side in the horizontal plane is the positive direction of the X axis; and the positive direction of the Z axis is set based on the right-handed Cartesian coordinate system, that is, the direction pointing from the front of the user to the rear of the user in the horizontal plane is the positive direction of the Z axis. The positive directions of the three coordinate axes are described here with reference to the posture of the head-mounted device 100 when the user wears the head-mounted device 100, but the coordinate system, once set, does not change with the posture of the head-mounted device 100. Moreover, the three axes and their positive directions set here are only examples, and the specific setting of the coordinate axes is not limited in the embodiments of the present application.
In this embodiment, with the coordinate system set in this way, Euler angles may be used to characterize the change of the posture of the head-mounted device 100. In other words, the gesture data of the head-mounted device 100 may include Euler angles. The Euler angles, also called attitude angles, include the pitch angle (pitch), the yaw angle (yaw), and the roll angle (roll).
As shown in fig. 3, the pitch angle is the angle by which the head-mounted device 100 rotates clockwise about the X axis, the yaw angle is the angle by which the head-mounted device 100 rotates clockwise about the Y axis, and the roll angle is the angle by which the head-mounted device 100 rotates clockwise about the Z axis.
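For reference, a rotation matrix can be composed from the three Euler angles defined above (pitch about X, yaw about Y, roll about Z). The sketch below is only illustrative: the multiplication order and sign conventions are assumptions, since the patent does not fix them.

```python
import numpy as np

def rotation_from_euler(pitch_deg: float, yaw_deg: float, roll_deg: float) -> np.ndarray:
    """Compose a 3x3 rotation matrix from pitch (X), yaw (Y) and roll (Z).
    The order roll * yaw * pitch is an assumed convention."""
    p, y, r = np.radians([pitch_deg, yaw_deg, roll_deg])
    rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
    return rz @ ry @ rx
```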
With reference to the actual scenario in which the user uses the head-mounted device 100, the user typically changes the image displayed by the head-mounted device 100 by turning the head left and right in the horizontal plane or by moving the head up and down. Therefore, with reference to fig. 3 to fig. 4C, the image change of the head-mounted device 100 is described by taking the example in which the user turns the head left and right in the horizontal direction. When the user turns the head left and right in the horizontal direction, the change in the posture of the head-mounted device 100, or the change in the posture of the user's head, can be represented by the yaw angle.
For example, the image 301 shown in fig. 4A is an image corresponding to three-dimensional scene data. The image corresponding to the three-dimensional scene data is larger than the image displayed by the head-mounted device 100 at one time.
When the user wears the head-mounted device 100 and views an image in the head-mounted device 100 with his head facing forward, the yaw angle measured by the head-mounted device 100 is zero, and then the image displayed by the head-mounted device 100 is an image within the marker box 302 shown in fig. 4A.
If the user's head turns right by α (for example, 30 degrees) in the horizontal direction, the yaw angle measured by the head-mounted device 100 is α (for example, 30 degrees); the image in the marking frame 303 shown in fig. 4B is then rendered according to the original three-dimensional scene data and α, and displayed. If the user's head turns left by α (for example, 30 degrees) in the horizontal direction, the yaw angle measured by the head-mounted device 100 is -α (for example, -30 degrees); the head-mounted device 100 processes the original three-dimensional scene data according to -α, and obtains and displays the image in the marking frame 304 shown in fig. 4C.
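To make the relationship between yaw and the displayed crop concrete, yaw can be mapped linearly to a horizontal offset inside the large image 301. The numbers and the linear mapping below are made-up assumptions purely for illustration; a real head-mounted device would re-project the three-dimensional scene rather than crop a flat image.

```python
def viewport_left_edge(yaw_deg: float, panorama_width_px: int = 3840,
                       horizontal_fov_deg: float = 360.0,
                       viewport_width_px: int = 960) -> int:
    """Horizontal start of the crop displayed for a given yaw (illustrative only).
    yaw = 0 centres the crop (box 302); a positive yaw moves it one way (box 303),
    a negative yaw moves it the other way (box 304)."""
    px_per_degree = panorama_width_px / horizontal_fov_deg
    centre = panorama_width_px / 2 + yaw_deg * px_per_degree
    return int(centre - viewport_width_px / 2)
```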
When the user rides a vehicle, the running state of the vehicle may affect the posture of the user's head. That is, the actual posture change of the head of a user in a moving vehicle includes the posture change caused by the user actively turning the head, and the passive posture change of the user's head caused by the change in the posture of the vehicle. The actual posture change of the head is the posture change of the head-mounted device 100. The posture change caused by the user actively turning the head is also referred to as the active change of the user's head. By actively turning the head, the user expresses the intention of changing the image displayed by the head-mounted device 100; in other words, the active change of the user's head expresses exactly this intention. In contrast, when the user's head changes passively due to a change in the posture of the vehicle, the user does not want the image displayed by the head-mounted device 100 to change. Accordingly, an image rendered based only on the gesture data measured by the head-mounted device 100 and the three-dimensional scene data may be inconsistent with the user's real intention and give the user a sense that the image has shifted.
For example, fig. 5A is a schematic diagram of a vehicle posture and a user head posture, where 51 is the vehicle and 52 is the user's head. When the vehicle and the user's head both face the same direction, the yaw angle of the vehicle is zero, that is, the passively changed yaw angle of the user's head posture is zero. The user does not actively turn the head either, that is, the actively changed yaw angle of the user's head is zero. The yaw angle measured by the head-mounted device 100 is therefore also zero. At this time, the head-mounted device 100 displays the image in the marking frame 302 in fig. 4A.
For example, as shown in fig. 5B, the vehicle undergoes a yaw change of α, that is, the user's head posture is passively changed by yaw angle α. At this time, the user does not actively turn the head, that is, the actively changed yaw angle of the user's head is zero. The yaw angle measured by the head-mounted device 100 is then α. If the prior art is adopted, the image is rendered according to the yaw angle measured by the head-mounted device 100 and the three-dimensional scene data, giving the image in the marking frame 303 in fig. 4B. In practice, however, the user has not actively turned the head, that is, the user does not want the image of the head-mounted device 100 to change; the user wishes to see the image within the marking frame 302 shown in fig. 4A. The user is thus given the perception that the image displayed by the head-mounted device 100 has shifted, that is, the image the user wishes to see is located to the left of the display screen of the head-mounted device 100.
As another example, as shown in fig. 5C, the vehicle undergoes a yaw change of α, that is, the user's head posture is passively changed by yaw angle α. At the same time, the user actively turns the head, that is, the actively changed yaw angle of the user's head is β. The yaw angle measured by the head-mounted device 100 is α+β. If the prior art is adopted, the image is rendered according to the yaw angle α+β measured by the head-mounted device 100 and the three-dimensional scene data, giving the image in the marking frame 601 shown in fig. 6A. In practice, however, the user has actively turned the head only by β, so what the user wants to see is the image corresponding to the yaw angle β in the image 301, that is, the image in the marking frame 602 shown in fig. 6B. The user is thus given the perception that the image displayed by the head-mounted device 100 has shifted, that is, the image the user wishes to see is located to the left of the display screen of the head-mounted device 100.
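The two scenarios can be tied to the cancellation idea with a tiny worked example: subtracting the vehicle yaw (measured by a stationary electronic device in the vehicle) from the yaw measured by the head-mounted device recovers the user's active rotation. The concrete angle values below are assumptions used only for illustration.

```python
# Worked illustration of the cancellation for the two scenarios above
# (angles in degrees; concrete values are assumed for illustration).

vehicle_yaw = 30.0             # alpha: yaw of the vehicle, i.e. the first gesture data

# Scenario of fig. 5B: the user does not actively turn the head.
headset_yaw = 30.0             # headset measures alpha
active_yaw = headset_yaw - vehicle_yaw
print(active_yaw)              # 0.0 -> keep showing the image in marking frame 302

# Scenario of fig. 5C: the user additionally turns the head by beta = 20 degrees.
headset_yaw = 30.0 + 20.0      # headset measures alpha + beta
active_yaw = headset_yaw - vehicle_yaw
print(active_yaw)              # 20.0 -> render the image for beta (marking frame 602)
```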
For this purpose, the embodiments of the present application propose that the gesture data of the vehicle be canceled from the gesture data measured by the head-mounted device 100, so as to obtain the gesture data of the active change of the user's head. Rendering is then performed according to the gesture data of the active change of the user's head and the original three-dimensional scene data, obtaining the image the user expects to see. In a specific implementation, an electronic device 200 that is relatively stationary in the vehicle can be used, that is, the gesture of the electronic device 200 does not change when the user actively changes the head gesture. The gesture data measured by the electronic device 200 then corresponds to the running gesture of the vehicle, and also corresponds to the gesture data of the passive change of the user's head gesture, namely, the first gesture data.
Here, the term "offset" in the posture data of the electronic device 200 is offset from the posture data of the head-mounted device 100, which may also be expressed as terms such as "subtract", "compensate", "remove", "filter", "subtract", and the like.
Wherein the electronic device 200 and the head mounted device 100 communicate with each other via a wired connection or a wireless connection (such as path 12 shown in fig. 1). The path 12 may employ, for example, bluetooth (BT), wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), zigbee (Zigbee), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared (IR), or general 2.4G/5G wireless communication technology, etc.
The head-mounted device 100 may be worn on the head of the user in a helmet manner or may be worn on the eyes of the user in an eyeglass manner, and the specific form of the head-mounted device 100 is not limited in this embodiment.
Fig. 2 shows a schematic structural diagram of the head-mounted device 100.
The head mounted device 100 may include a processor 110, an external memory interface 120, an internal memory 150, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, a wireless communication module 160, an audio module 170, a speaker 170A, a microphone 170C, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and the like. The sensor module 180 may include a gyro sensor 180A, a magnetic sensor 180B, an acceleration sensor 180C, and the like.
It will be appreciated that the illustrated structure of the embodiments of the present invention does not constitute a specific limitation on the described headset 100. In other embodiments of the present application, the headset 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The internal memory 150 may be used to store computer executable program code including instructions. The internal memory 150 may include a stored program area and a stored data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the headset 100 (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 150 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), etc. The processor 110 performs various functional applications and data processing of the head mounted device 100 by executing instructions stored in the internal memory 150, and/or instructions stored in a memory provided in the processor.
In some embodiments of the present application, the head-mounted device 100 may be of a unitary design, i.e. the head-mounted device 100 itself comprises the processor 110 and the internal memory 150 for implementing relevant data processing tasks. For example, based on the gesture data (i.e., the second gesture data) of the head-mounted device 100 acquired by the sensor module 180 and the received gesture data (i.e., the first gesture data) of the electronic device 200, actively changed gesture data (i.e., the third gesture data) of the head of the user is calculated, then, based on the third gesture data and the stored three-dimensional scene data, an image to be displayed is rendered, displayed by the display screen 194 of the head-mounted device 100, and so on.
It should be noted that, in an embodiment, the internal memory 150 of the head-mounted device 100 stores three-dimensional scene data of a related application. For example, if the headset 100 can be used to play VR video, the three-dimensional scene data includes three-dimensional VR video data. If the headset 100 can be used for VR games, the three-dimensional scene data includes three-dimensional game data (character data, scene data, basic terrain data, etc.). In other examples, the three-dimensional scene data of the relevant application may also be stored in an external memory through the external memory interface 120.
In other embodiments of the present application, the headset 100 may be of a split design, i.e. the headset 100 is configured to acquire its second pose data and to display one or more rendered images, while the relevant data processing work is wholly or partially handed over to another device, such as the electronic device 200, for processing. For example, the head-mounted device 100 may send the acquired second posture data to the electronic device 200, and the electronic device 200 calculates the third posture data according to the second posture data and the first posture data acquired by itself, that is, the posture data of the electronic device 200. Then, the electronic device 200 renders an image or the like to be displayed according to the third pose data and the stored three-dimensional scene data. The electronic device 200 transmits the image to be displayed to the head-mounted device 100, and the image is displayed by the head-mounted device 100. In some examples, the head mounted device 100 may also not include the processor 110 and/or the internal memory 150.
In another embodiment of the present application, the head-mounted device 100 may send the acquired second gesture data to another device A, which is not the electronic device 200. The electronic device 200 also sends the acquired first gesture data to the device A, and the device A calculates the third gesture data according to the received first gesture data and second gesture data. Then, the device A renders an image to be displayed according to the third gesture data and the stored three-dimensional scene data, sends the image to be displayed to the head-mounted device 100, and the image is displayed by the head-mounted device 100.
It should be noted that, in this embodiment, the three-dimensional scene data of the related application is stored on the device that performs the image rendering work, that is, the electronic device 200 or the device A.
The USB interface 130 is an interface conforming to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc. The USB interface 130 may be used to connect a charger to charge the head-mounted device 100, or may be used to transfer data between the head-mounted device 100 and a peripheral device (e.g., the electronic device 200). The USB interface 130 may also be used to connect earphones and play audio through the earphones.
For example, when the head-mounted device 100 is of a split design, the USB interface 130 may be specifically configured to connect to the electronic device 200 or the device A, to transmit the second gesture data, and to receive the rendered image to be displayed.
For another example, when the headset 100 is of a unitary design, the USB interface 130 may also be specifically configured to connect to a handle for receiving a user's operation.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present invention is only illustrative, and does not limit the structure of the head-mounted device 100. In other embodiments of the present application, the headset 100 may also use an interfacing manner different from that in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the headset 100. The charging management module 140 may also supply power to the mobile terminal through the power management module 141 while charging the battery 142.
The power management module 141 is configured to connect the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 150, the display 194, the camera 193, the wireless communication module 160, etc. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, a power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the headset 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via an antenna, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via an antenna.
The head mounted device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used for displaying images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the headset 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
In some embodiments of the present application, the headset 100 includes two display screens, one of which is positioned in front of the left eye for viewing by the left eye. The other display screen is positioned in front of the right eye for the right eye to watch. The images displayed by the two display screens have a certain visual angle difference, and the visual angle difference is consistent with the visual angle difference of two eyes of a person, so that a user can see a three-dimensional effect.
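For illustration only, the following sketch shows one common way such a pair of per-eye views can be produced: the same head pose is offset to the left and to the right by half of the interpupillary distance. The function name, the right_axis parameter and the 0.063 m default are assumptions made for this sketch, not details taken from the embodiment.

import numpy as np

# Sketch: derive the left-eye and right-eye viewpoints from a single head position
# so that the two displayed images differ by a small viewing angle, producing the
# stereoscopic (three-dimensional) effect described above.
def eye_view_positions(head_position, right_axis, ipd=0.063):
    half_offset = 0.5 * ipd * np.asarray(right_axis, dtype=float)
    left_eye = np.asarray(head_position, dtype=float) - half_offset
    right_eye = np.asarray(head_position, dtype=float) + half_offset
    return left_eye, right_eye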
It may be appreciated that, for the image displayed on any one of the multiple display screens, the method provided by the embodiment of the present application may be adopted to correct the acquired posture data of the head-mounted device 100, that is, cancel the passively changed posture data of the head of the user, render the image according to the corrected posture data, and display the image on the corresponding display screen.
In some examples, the headset 100 may implement shooting functions through an ISP, the camera 193, a video codec, a GPU, a display 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, an ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the headset 100 may include 1 or N of the cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals in addition to digital image signals. For example, when the headset 100 selects a frequency point, the digital signal processor is configured to perform a Fourier transform on the frequency point energy, and so on.
The video codec is used to compress or decompress digital video. The headset 100 may support one or more video codecs. In this way, the headset 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Smart awareness of the headset 100 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the head mounted device 100. An external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are stored in an external memory card.
The headset 100 may implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and an application processor, etc. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The headset 100 may listen to music, or to hands-free conversations, through the speaker 170A.
The microphone 170C, also referred to as a "mike" or "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, a user can make a sound with the mouth close to the microphone 170C to input a sound signal into the microphone 170C. The headset 100 may be provided with at least one microphone 170C. In other embodiments, the headset 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the headset 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify the source of the sound, implement a directional recording function, etc.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys or touch keys. The headset 100 may receive key inputs and generate key signal inputs related to user settings and function control of the headset 100.
The indicator 192 may be an indicator light, which may be used to indicate a state of charge, a change in charge, an indication message, a missed call, a notification, etc.
For example, the electronic device 200 in the embodiment of the present application may be a mobile phone, a tablet computer, a personal computer (personal computer, PC), a personal digital assistant (personal digital assistant, PDA), a smart watch, a netbook, a wearable electronic device, or the like, and the specific form of the electronic device is not particularly limited in the present application.
The electronic device 200 may include a processor, an external memory interface, an internal memory, a USB interface 130, a charge management module, a power management module, a battery, a mobile communication module, a wireless communication module, an audio module, a speaker, a receiver, a microphone, an earphone interface, a sensor module, keys, a motor, an indicator, a camera, a display screen, a subscriber identity module (subscriber identification module, SIM) card interface, and the like.
The sensor module may include a motion sensor for measuring gesture data of the electronic device 200, i.e. the first gesture data. Examples of the motion sensor include a gyro sensor, an air pressure sensor, and a magnetic sensor. In this application, the electronic device 200 is placed in a relatively stable position while the user is riding the vehicle, such as on a table top of the vehicle, in a pocket behind a seat, or on another support that is capable of securing the electronic device 200, etc. This is because, when the electronic device 200 is in a steady state on the vehicle, the first posture data acquired by the electronic device 200 corresponds to the amount of posture change caused to the head-mounted device 100 by the running of the vehicle.
Of course, the electronic device 200 may also include other sensors, such as acceleration sensors, distance sensors, pressure sensors, proximity sensors, fingerprint sensors, temperature sensors, touch sensors, ambient light sensors, bone conduction sensors, and the like.
The mobile communication module may provide a solution including wireless communication such as 2G/3G/4G/5G applied to the electronic device 200. The mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module can receive electromagnetic waves by the antenna, filter, amplify and the like the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module can amplify the signal modulated by the modulation and demodulation processor and convert the signal into electromagnetic waves to radiate through the antenna. In some embodiments, at least part of the functional modules of the mobile communication module may be provided in the processor. In some embodiments, at least part of the functional modules of the mobile communication module may be provided in the same device as at least part of the modules of the processor.
Other modules of the electronic device 200 may refer to the descriptions of related modules in the head-mounted device 100, which are not described herein.
It should be understood that the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on the electronic device 200. In other embodiments of the present application, electronic device 200 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The technical solutions involved in the following embodiments may be implemented in the head-mounted device 100 and the electronic device 200 having the above-described architecture. The following describes in detail the technical solutions provided in the embodiments of the present application with reference to specific scenarios and accompanying drawings.
In some embodiments of the present application, the headset 100 is of a split design.
Illustratively, the headset 100 (e.g., VR glasses, VR headset, etc.) may be used in conjunction with an electronic device 200 (e.g., cell phone, computer, etc.). Wherein, the head-mounted device 100 acquires its own gesture data, namely, the second gesture data, and transmits the second gesture data to the electronic device 200. The electronic device 200 acquires its own attitude data, i.e., first attitude data, and is responsible for data processing work. For example, third pose data is calculated from the second pose data and the first pose data, an image to be displayed by the head mounted device 100 is rendered from the third pose data and the three-dimensional scene data, and so on.
Hereinafter, VR glasses are taken as the head-mounted device 100, and a mobile phone is taken as the electronic device 200 for example.
First, a communication connection is established between the VR glasses and the mobile phone, for example, a wired connection established through a data cable, or a wireless connection established through Wi-Fi, etc.
The VR glasses may display a user interface 700 as in fig. 7A on a display screen. By wearing the VR glasses, the user can see the corresponding interface on the display screen of the VR glasses. The user interface 700 may include a status bar 701, one or more recommended application icons 702, and one or more quick-start application icons 703. The status bar 701 may display information such as the mobile phone battery level, the time, the connection or disconnection status of the handle, and the network status. The recommended application icons 702 are icons corresponding to the latest and most popular recommended VR applications, for example: a "tear down team" game application icon, a "hire" game application icon, a "beautiful star" movie icon, and the like. The user can also view more recommended application icons by turning the screen left and right. The quick-start application icons 703 are default or user-set resident application icons through which the user can quickly start the corresponding applications, such as the application marketplace, VR phone drop, the content library, settings, and the like. The user interface 700 may also include icons of recently used applications, and the like. The content of the user interface 700 is not particularly limited in the embodiments of the present application. In one example, after the VR glasses and the mobile phone establish a communication connection, the screen of the mobile phone may be turned off.
In one example, a cursor may also be displayed in the interface displayed by the VR glasses, and the user may use the mobile phone, or a handle connected to the mobile phone, to manipulate the display interface of the VR glasses in conjunction with the position of the cursor, similar to the way a user uses a mouse to manipulate a computer display interface. Fig. 7B is a schematic structural diagram of a handle 300 according to an embodiment of the present application. The handle 300 includes a touch pad 301, a back key, an OK key, and a volume key. The user may perform operations such as moving, clicking, double-clicking, and dragging the cursor through the touch pad 301. In another example, the VR glasses display interface may not display a cursor, and the user may use the mobile phone, or a handle connected to the mobile phone, to perform up, down, left, and right operations for switching the selected icon or control in the VR glasses display interface, similar to the way a remote control is used to select options in a television menu. The control method of the VR glasses is not limited in the embodiments of the application.
Here, manipulation with the handle is taken as an example. On the user interface 700, the user may use the handle to move the cursor to the position of the "settings" application icon, and click the touch pad 301 or press the OK key, i.e., select the "settings" application icon. Optionally, the "settings" application icon may indicate to the user that the application icon has been selected by changing its color, thickening its border, using an animation, or the like. In response to the "settings" application icon being selected, the VR glasses display a settings interface 704 as shown in fig. 7C. Function controls for the travel mode, screen size, cinema lighting, etc. may be included in the settings interface 704.
Further, the user may select the travel mode control and choose to turn on the travel mode, for example, an interface 705 as shown in fig. 7D is displayed. A prompt 706 may be displayed in the interface 705 to prompt the user to keep the mobile phone in a steady state, for example, by placing the mobile phone on a small table or in a pocket behind the front seat, etc. It can be understood that, after the travel mode is turned on, the posture data of the VR glasses need to be corrected according to the posture data of the mobile phone, and the three-dimensional scene data are rendered according to the corrected posture data to obtain the image displayed by the VR glasses, so that the problem of image offset described in the background art is avoided.
In other examples, the travel mode control may also be displayed on other interfaces, such as on user interface 700, which facilitates the user to quickly initiate travel modes.
Of course, the VR glasses may also prompt the user by playing a voice prompt. Alternatively, the corresponding prompt information may be displayed, or the voice prompt played, by the mobile phone. Alternatively, when the mobile phone is detected to be in an unstable state, the user may be prompted to keep the mobile phone in a stable state, and so on. The embodiments of the application do not limit the prompting manner or the prompting time.
The above examples illustrate the case where the user manually turns on the travel mode. In other examples, the travel mode may be turned on automatically when it is determined, according to the sensor data of the VR glasses or the sensor data of the mobile phone, that the user is riding a vehicle. For example, the travel mode may be turned on automatically when the moving speed derived from the acceleration sensor of the VR glasses or the mobile phone is greater than a preset value (for example, 15 km/h), or when the mobile phone determines that the moving speed of the user's position is greater than the preset value. For another example, the mobile phone may automatically turn on the travel mode according to the travel information of the user. The travel information may be train ticket information, airplane ticket information, schedule information, and the like. The conditions for automatically turning on the travel mode are not particularly limited.
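As a rough illustration only, the following sketch shows how such a trigger could be checked in software. The function name, the form of the travel information and the reuse of the 15 km/h value are assumptions made for this sketch, not details of the embodiment.

SPEED_THRESHOLD_KMH = 15.0  # example preset value mentioned above

def should_enable_travel_mode(speed_kmh, travel_info, now):
    # Condition 1: the measured moving speed exceeds the preset value.
    if speed_kmh > SPEED_THRESHOLD_KMH:
        return True
    # Condition 2: the user's travel information (e.g. train or airplane ticket,
    # schedule entry) indicates a trip that is currently in progress.
    for trip in travel_info:
        if trip["departure"] <= now <= trip["arrival"]:
            return True
    return False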
After the travel mode is turned on, the cell phone is placed in a position that is capable of maintaining a steady state, i.e., a state that is relatively stationary with the vehicle. For example, on a table top of a vehicle, or in a pocket behind a seat, or other support that is capable of securing the electronic device 200, etc. From the foregoing analysis, the mobile phone gesture obtained by the mobile phone can be considered to be equivalent to the running gesture of the vehicle.
As shown in fig. 8A, a flowchart of an image rendering method provided in an embodiment of the present application includes steps S801 to S806, which are specifically as follows:
S801, the mobile phone acquires first gesture data of the mobile phone; the VR glasses acquire second pose data of the VR glasses.
In some embodiments of the present application, the first gesture data may be used to characterize a current gesture of the mobile phone. For example, the first pose data may include euler angles of the handset. The euler angles may include pitch angle (pitch), yaw angle (yaw), and roll angle (roll), among others.
In some embodiments of the present application, the first gesture data may also be other data that may be used to characterize the current gesture of the mobile phone, such as a quaternion, a rotation matrix, and the like, which is not limited in the embodiments of the present application.
It should be noted that the euler angles, quaternions and rotation matrices can be mutually converted. The manner of conversion between several parameters is given below.
A. Converting the Euler angles into a rotation matrix M.

Assume that the yaw angle among the Euler angles is yaw, the pitch angle is pitch, and the roll angle is roll.

Then the rotation matrix M is the product of the elementary rotation matrices about the coordinate axes corresponding to yaw, pitch and roll, and its entries are formed from c1, s1, c2, s2, c3 and s3, wherein c1 = cos yaw, s1 = sin yaw, c2 = cos pitch, s2 = sin pitch, c3 = cos roll, s3 = sin roll.

B. Converting the rotation matrix M into Euler angles.

Assume that the entries of M are mij (i, j = 1, 2, 3).

Then, yaw = atan2(m13, m33); pitch = arcsin(−m23); roll = atan2(m21, m22).

C. Converting the Euler angles into a quaternion q.

The quaternion q is obtained by composing the half-angle rotations corresponding to the yaw, pitch and roll angles.

D. Converting the quaternion q into Euler angles.

Assume that the quaternion q = [x y z w].

Then, yaw = atan2(2(wx + yz), 1 − 2(x² + y²)); pitch = arcsin(2(wy − xz)); roll = atan2(2(wz + xy), 1 − 2(y² + z²)).

E. Converting the quaternion q into a rotation matrix M.

Assume that the quaternion q = [x y z w]. Then each entry of M is the corresponding quadratic expression in x, y, z and w.

F. Converting the rotation matrix M into a quaternion q.
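For convenience, the conversions A, B and D above can also be written as the following sketch. The rotation matrix in euler_to_matrix is reconstructed from the formulas in B under the same axis convention and is therefore an assumption rather than a quotation of the original text; the [x y z w] quaternion layout follows D.

import math
import numpy as np

def euler_to_matrix(yaw, pitch, roll):
    # Conversion A: Euler angles to rotation matrix M (reconstructed so that the
    # formulas of conversion B below recover the same angles).
    c1, s1 = math.cos(yaw), math.sin(yaw)
    c2, s2 = math.cos(pitch), math.sin(pitch)
    c3, s3 = math.cos(roll), math.sin(roll)
    return np.array([
        [c1 * c3 + s1 * s2 * s3, s1 * s2 * c3 - c1 * s3, s1 * c2],
        [c2 * s3,                c2 * c3,                -s2    ],
        [c1 * s2 * s3 - s1 * c3, s1 * s3 + c1 * s2 * c3, c1 * c2],
    ])

def matrix_to_euler(M):
    # Conversion B: yaw = atan2(m13, m33), pitch = arcsin(-m23), roll = atan2(m21, m22).
    yaw = math.atan2(M[0, 2], M[2, 2])
    pitch = math.asin(-M[1, 2])
    roll = math.atan2(M[1, 0], M[1, 1])
    return yaw, pitch, roll

def quaternion_to_euler(q):
    # Conversion D, with q = [x y z w] as in the text.
    x, y, z, w = q
    yaw = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(2 * (w * y - x * z))
    roll = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return yaw, pitch, roll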
In some examples, the mobile phone is configured with a motion sensor that can be used to obtain the posture data of the mobile phone. The motion sensor may include, for example, a gyroscope, an accelerometer, and a magnetometer. By way of example, the angular velocity of the mobile phone may be obtained by the gyroscope, the acceleration of the mobile phone may be obtained by the accelerometer, and the geomagnetic field strength may be obtained by the magnetometer. Furthermore, the Euler angles and/or the quaternion of the mobile phone can be calculated according to the angular velocity of the mobile phone, and the obtained Euler angles and/or quaternion can be corrected by using the acceleration and the geomagnetic field strength of the mobile phone, so as to obtain Euler angles and/or a quaternion with higher precision. The first posture data may be unprocessed sensor data, for example, the angular velocity, the acceleration, the geomagnetic field strength, and the like; or it may be the Euler angles and/or quaternion calculated from the acquired sensor data. Of course, the first posture data may also include both the unprocessed sensor data and the calculated Euler angles and/or quaternion, which is not limited in the embodiments of the present application.
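A heavily simplified illustration of this kind of correction is sketched below. It is a complementary-filter style blend and not the actual fusion algorithm of the mobile phone; the blending factor, the function name and the non-tilt-compensated yaw estimate are assumptions made for illustration.

import numpy as np

def update_orientation(euler, gyro, accel, mag, dt, blend=0.98):
    # 1. Integrate the angular velocity from the gyroscope to propagate the attitude.
    predicted = np.asarray(euler, dtype=float) + np.asarray(gyro, dtype=float) * dt

    # 2. Derive an absolute attitude reference from gravity (accelerometer) and the
    #    geomagnetic field (magnetometer); the yaw reference here ignores tilt.
    ax, ay, az = accel
    pitch_ref = np.arctan2(-ax, np.sqrt(ay * ay + az * az))
    roll_ref = np.arctan2(ay, az)
    mx, my, _ = mag
    yaw_ref = np.arctan2(-my, mx)
    reference = np.array([yaw_ref, pitch_ref, roll_ref])

    # 3. Blend: the gyroscope result dominates in the short term, while the absolute
    #    reference slowly corrects its drift, yielding higher-precision Euler angles.
    return blend * predicted + (1.0 - blend) * reference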
It can be appreciated that, in the embodiments of the present application, the corresponding posture data may also be selected according to the actual scene. When using VR glasses, a user typically changes the picture displayed by the VR glasses by turning the head left and right or moving the head up and down; in that case, the posture data may include only the yaw angle and the pitch angle. For another example, if the VR glasses are configured to change the displayed image only when the head is rotated left and right, that is, moving the head up and down does not change the image displayed by the VR glasses, the posture data may include only the yaw angle. This is not limited in the embodiments of the present application. In the following, the first posture data is taken as the Euler angles and/or quaternion obtained after calculation as an example.
The second pose data may be used to characterize a current pose of the head-mounted device. For example, the second pose data may include the Euler angles of the head-mounted device, which include the pitch angle (pitch), the yaw angle (yaw) and the roll angle (roll). Of course, the second pose data may also be other data that can be used to characterize the current pose of the headset, such as a quaternion, a rotation matrix, and the like, which is not limited in the embodiments of the present application. It should be noted that the Euler angles, quaternions and rotation matrices can be converted into one another.
Similarly, VR glasses are also configured with motion sensors that can be used to obtain pose data for the VR glasses. The motion sensor may include, for example, gyroscopes, accelerometers, and magnetometers, among others. It should be noted that, the second gesture data may be sensor data of the VR glasses that are not processed, and after the subsequent mobile phone receives the sensor data, the euler angle and/or quaternion of the VR glasses may be calculated. The second gesture data may also be euler angles and/or quaternions calculated from the acquired sensor data. Of course, the second gesture data may also include both unprocessed sensor data and calculated euler angles and/or quaternions, which are not limited in particular in the embodiment of the present application.
Similarly, as can be seen from the foregoing description, the Euler angles here are the angles by which the VR glasses rotate about the coordinate axes. For the rest, reference may be made to the related description of the posture data of the mobile phone, which is not repeated here.
S802, the VR glasses send the acquired second gesture data to the mobile phone.
The VR glasses may send the second gesture data acquired by the VR glasses to the mobile phone through the communication connection between the VR glasses and the mobile phone, such as a wired connection, or a wireless connection such as a Wi-Fi connection.
S803, the mobile phone obtains third gesture data according to the first gesture data and the second gesture data.
From the above analysis, the first pose data corresponds to the operational pose of the vehicle and also corresponds to the passively changing pose data of the user's head. The second pose data is pose data of the VR glasses including actively changing pose data and passively changing pose data of the user's head. Because the last presented image of the VR glasses should be consistent with the actively changed pose data of the user's head, the mobile phone needs to cancel the first pose data from the second pose data to obtain the third pose data.
In some embodiments, if the pose data is euler angles, the second euler angles in the second pose data may be subtracted from the first euler angles in the first pose data to obtain third euler angles, i.e., third pose data.
For example, consider the case shown in fig. 5B. The first gesture data measured by the mobile phone is: the yaw angle is α; the second posture data measured by the VR glasses is: the yaw angle is α. Then, the mobile phone may calculate the third gesture data: the yaw angle is α − α = 0.
For another example, consider the case shown in fig. 5C. The first gesture data measured by the mobile phone is: the yaw angle is α; the second posture data measured by the VR glasses is: the yaw angle is α + β. Then, the mobile phone may calculate the third gesture data: the yaw angle is (α + β) − α = β, i.e., the yaw angle is β.
In other embodiments, if the gesture data is a quaternion, then, according to the calculation principle of quaternions, the quaternion in the second gesture data may be multiplied by the inverse (conjugate) of the quaternion in the first gesture data to obtain the quaternion of the third gesture data. Similarly, if the gesture data is a rotation matrix, the third gesture data is calculated according to the calculation principle of matrices. The specific calculation process may refer to the related calculation principles in the prior art, and is not described here.
It should be noted that the third gesture data may include any one or any combination of euler angles, quaternions, rotation matrices. And the Euler angles, quaternions and rotation matrices in the third gesture data can be mutually converted.
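The following sketch illustrates step S803 for both representations. The function names are illustrative; the quaternion variant cancels the first gesture data by composing with its conjugate (its inverse, for a unit quaternion), using the [x y z w] layout from above.

import numpy as np

def cancel_euler(second_euler, first_euler):
    # Per-angle subtraction, e.g. for the yaw angle: (alpha + beta) - alpha = beta.
    return np.asarray(second_euler, dtype=float) - np.asarray(first_euler, dtype=float)

def quat_conjugate(q):
    x, y, z, w = q
    return np.array([-x, -y, -z, w])

def quat_multiply(q1, q2):
    # Hamilton product for quaternions stored as [x y z w].
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

def cancel_quaternion(second_q, first_q):
    # Removing the vehicle-induced rotation (first gesture data) from the glasses'
    # rotation (second gesture data) leaves the user's active head rotation.
    return quat_multiply(quat_conjugate(first_q), second_q)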
S804, the mobile phone processes the three-dimensional scene data according to the third gesture data to obtain a first image.
Wherein three-dimensional scene data can be understood as a set of parameters for describing a model of a three-dimensional scene of a related application. For example, if VR glasses are to play VR video, the three-dimensional scene data includes three-dimensional VR video data. If VR glasses are available for VR games, the three-dimensional scene data includes three-dimensional game data. Wherein the three-dimensional game data includes character data, scene data, basic topography data, and the like. When the VR glasses are designed in a split mode, three-dimensional scene data are stored in the mobile phone.
Illustratively, the mobile phone projects the three-dimensional scene according to the three-dimensional scene data to obtain a two-dimensional image, i.e., the original image, such as the image 301 shown in fig. 4A. Both the Euler angles and the rotation matrix in the third pose data can be understood as the angle by which the user actively turns the head. From computer graphics and computer image processing knowledge, an image is composed of pixels, and each pixel corresponds to a coordinate and a color. Taking a pixel with coordinate p and color a in the original image as an example, the coordinate of the new pixel obtained after rotation by the matrix M is p′ = p × M, and the corresponding color becomes a′. By analogy, rotation transformation is performed on all pixels in the original image to obtain an image formed by the new pixels, namely the first image.
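As a literal illustration of this per-pixel description (in practice, the equivalent transformation is performed by the GPU while rendering), the following sketch applies the rotation matrix from the third pose data to every pixel coordinate of the original image. The function name and the data layout are assumptions; resampling and interpolation of the colors are omitted.

import numpy as np

def rotate_image_pixels(original_pixels, M):
    # original_pixels: iterable of (p, color) pairs, where p is a 1x3 coordinate
    # vector of a pixel in the original (projected) image.
    first_image = []
    for p, color in original_pixels:
        p_new = np.asarray(p, dtype=float) @ M  # p' = p x M, as described above
        first_image.append((p_new, color))
    return first_image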
In the embodiment of the present application, for example, when the user and the vehicle are in the state shown in fig. 5B, the original image is processed with the third posture data. The Euler angle in the third pose data is 0, the rotation matrix M1 is the identity matrix I, and the new pixel coordinate obtained by rotation is unchanged, that is, p′ = p × M1 = p × I = p, so the first image formed by the new pixels is offset by 0 degrees compared with the original image. Because the angle by which the user actively rotates the head is also 0, the two match, and the user will not feel that the picture in the VR glasses drifts. For example, using the method described in the present application, the VR glasses display the image in the marking frame 801 in fig. 8B.
If the prior art is adopted and the original image is processed with the second posture data, the posture angle in the second posture data is α, the rotation matrix is M2, the new pixel coordinate obtained by rotation is p′ = p × M2, the color corresponding to the new pixel also changes, and the first image formed by the new pixels is offset by α degrees compared with the original image. However, since the angle by which the user actively rotates the head is 0, the two do not match, so the user feels the picture drift. Using the prior art, the VR glasses display the image in the marking frame 802 in fig. 8C.
In this embodiment of the present application, when the user and the vehicle are in the state shown in fig. 5C, the original image is processed with the third posture data according to the technology described in this application. The posture angle in the third posture data is β, the rotation matrix is M3, the new pixel coordinate obtained by rotation is p′ = p × M3, the color corresponding to the new pixel also changes, and the first image formed by the new pixels is offset by β degrees compared with the original image. Because the angle by which the user actively rotates the head is β, the two match, so the user does not feel that the picture in the VR glasses drifts. As shown in fig. 8D, the image displayed by the VR glasses is the image in the marking frame 803 in fig. 8D.
If the prior art is adopted and the original image is processed with the second posture data, the posture angle in the second posture data is α + β, the rotation matrix is M4, the new pixel coordinate obtained by rotation is p′ = p × M4, the color corresponding to the new pixel also changes, and the first image formed by the new pixels is offset by α + β degrees compared with the original image. However, because the angle by which the user actively rotates the head is β, the two do not match, and the user feels the picture drift. The VR glasses of the prior art display the image in the marking frame 804 in fig. 8E.
S805, the mobile phone sends the first image to the VR glasses.
S806, VR glasses display the first image.
In conclusion, the first image displayed by the VR glasses is rendered according to the posture change caused by the user actively rotating the head, and the influence of the running posture of the vehicle on the first image is filtered out.
Therefore, the gesture data of the mobile phone in a stable state in the vehicle are obtained and regarded as equivalent to the running gesture data of the vehicle, and the running gesture data of the vehicle are cancelled from the gesture data of the VR glasses, so as to obtain the actively changed gesture data of the head of the user. The image finally displayed by the VR glasses is rendered according to the gesture data of the head actively changed by the user, which is consistent with the intention of the user, so that the user does not feel image offset, and the VR experience of the user is improved.
Further, in the process that the user uses the VR glasses, the user may accidentally move the mobile phone, or the vehicle may suddenly change direction to a large extent, so that the gesture data of the mobile phone are greatly deflected and the picture displayed by the VR glasses is also greatly deflected, affecting the use experience of the user. For this reason, before calculating the third gesture data using the first gesture data and the second gesture data, it may be determined whether the mobile phone is greatly deflected based on the first gesture data. If a large deflection occurs, it is considered that the mobile phone may have been accidentally moved or the vehicle has changed direction to a large extent, and the first gesture data cannot be used, i.e. the first gesture data cannot be cancelled from the second gesture data. At this time, the first image needs to be rendered directly using the second gesture data and the stored three-dimensional scene data, so as to prevent the image displayed by the VR glasses from being greatly deflected. If the mobile phone is not greatly deflected, the mobile phone is considered to be in a stable state, and the first gesture data can be used to adjust the second gesture data to obtain the third gesture data.
For example, the first gesture data may include the angular velocity of the mobile phone. Then, whether the mobile phone is greatly deflected may be determined by determining whether the angular velocity of the mobile phone is greater than a threshold 1 (for example, 50 degrees/second). If the angular velocity of the mobile phone is greater than the threshold 1, the mobile phone is considered to be greatly deflected. If the angular velocity of the mobile phone is not greater than the threshold 1, the mobile phone is considered not to be greatly deflected.
Therefore, in the process that the user uses the VR glasses, whether the angular velocity of the mobile phone is excessive is determined according to the first gesture data, and the first gesture data are cancelled from the second gesture data only when the angular velocity of the mobile phone is not excessive. This prevents the picture of the VR glasses from being greatly deflected when the mobile phone is greatly deflected, and further improves the VR experience of the user.
Still further, after the mobile phone receives the second gesture data of the VR glasses, it may also be determined whether the VR glasses are in a stable state, i.e. whether the VR glasses do not deflect, or deflect so little that the deflection can be ignored. If the VR glasses are in a stable state, the mobile phone is basically also in a stable state, the value of the first gesture data is zero or small, and its influence on the second gesture data is not great, so the first gesture data need not be cancelled from the second gesture data. If the VR glasses are in an unstable state, the mobile phone is basically also in an unstable state, the value of the first gesture data of the mobile phone is large, and its influence on the second gesture data is also large, so the first gesture data need to be cancelled from the second gesture data.
For example, the second pose data include the acceleration and the angular velocity. Then, whether the VR glasses are in a steady state may be determined by determining whether the acceleration of the VR glasses is less than or equal to a threshold 2 (for example, 0.5 m/s²) and whether the angular velocity of the VR glasses is less than or equal to a threshold 3 (for example, 8 degrees/second). If the acceleration of the VR glasses is less than or equal to the threshold 2 and the angular velocity of the VR glasses is less than or equal to the threshold 3, the VR glasses are considered to be in a steady state. If the acceleration of the VR glasses is greater than the threshold 2, or the angular velocity of the VR glasses is greater than the threshold 3, the VR glasses are considered to be in an unstable state.
Therefore, in the process of using the VR glasses by the user, whether the VR glasses are in a stable state can be determined. If the VR glasses are in an unstable state, the gesture data of the user actively rotating the head are calculated according to the gesture data of the VR glasses and the gesture data of the mobile phone, and are then rendered with the three-dimensional scene data to obtain the first image. If the VR glasses are in a stable state, the gesture data of the VR glasses and the three-dimensional scene data can be directly used for rendering to obtain the first image. In other words, when the VR glasses are in a stable state, the amount of calculation of the mobile phone can be reduced, and the processing rate of the mobile phone can be improved.
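Putting the two checks together, a minimal sketch of how the third gesture data could be selected is given below. The thresholds reuse the example values above; the function and constant names, and the use of Euler angles, are assumptions made for this sketch rather than the actual implementation.

ANGULAR_THRESHOLD_1 = 50.0  # degrees/second, example value for the phone check
ACCEL_THRESHOLD_2 = 0.5     # m/s^2, example value for the glasses check
ANGULAR_THRESHOLD_3 = 8.0   # degrees/second, example value for the glasses check

def select_third_pose(first_euler, second_euler, phone_angular_velocity,
                      glasses_accel, glasses_angular_velocity):
    # Phone deflected greatly (accidentally moved, or the vehicle turned sharply):
    # its data cannot be used, so keep the glasses' posture as it is.
    if phone_angular_velocity > ANGULAR_THRESHOLD_1:
        return list(second_euler)
    # Glasses in a steady state: cancellation is unnecessary, which saves computation.
    if glasses_accel <= ACCEL_THRESHOLD_2 and glasses_angular_velocity <= ANGULAR_THRESHOLD_3:
        return list(second_euler)
    # Otherwise cancel the phone's (vehicle's) posture from the glasses' posture.
    return [s - f for s, f in zip(second_euler, first_euler)]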
In other embodiments of the present application, the headset 100 is of unitary design.
Illustratively, the headset 100 (e.g., VR glasses, VR headset, etc.) may be used in conjunction with an electronic device 200 (e.g., cell phone, computer, etc.). The electronic device 200 is configured to obtain its own gesture data, i.e. the first gesture data, and send the first gesture data to the head-mounted device 100. The head-mounted device 100 acquires own posture data, namely second posture data, and is responsible for data processing work. For example, third pose data is calculated from the second pose data and the first pose data, an image to be displayed is rendered from the third pose data and the three-dimensional scene data, and the image is displayed, and so on.
The following description will also take VR glasses as the head-mounted device 100 and a mobile phone as the electronic device 200 as an example.
First, the travel mode is turned on. The manner in which the travel mode is started and the interface displayed by the VR glasses are the same as described above, and will not be described here again.
After the travel mode is started, the mobile phone is placed at a position capable of keeping a stable state (keeping a state relatively stationary with the vehicle), and according to the analysis, the gesture of the mobile phone obtained by the mobile phone can be considered to be identical to the running gesture of the vehicle.
As shown in fig. 9, a flowchart of another image rendering method provided in an embodiment of the present application includes steps S901 to S905, which are specifically as follows:
S901, the mobile phone acquires the first posture data, and the VR glasses acquire the second posture data.
S902, the mobile phone sends the first gesture data to the VR glasses.
S903, the VR glasses obtain third posture data according to the first posture data and the second posture data.
And S904, the VR glasses process the three-dimensional scene data according to the obtained third gesture data to obtain a first image.
When the VR glasses are designed integrally, three-dimensional scene data are stored in the VR glasses. And processing the third gesture data obtained by calculation and the three-dimensional scene data by the VR glasses to obtain a first image displayed by the VR glasses.
S905, VR glasses display the first image.
Except for the difference in the execution bodies of some steps, the description of the relevant steps in fig. 8A may be referred to for the rest, and the details are not repeated here.
Embodiments of the present application also provide a chip system, as shown in fig. 10, comprising at least one processor 1401 and at least one interface circuit 1402. The processor 1401 and the interface circuit 1402 may be interconnected by wires. For example, interface circuit 1402 may be used to receive signals from other devices (e.g., memory). For another example, interface circuit 1402 may be used to send signals to other devices (e.g., processor 1401). Illustratively, the interface circuit 1402 may read instructions stored in the memory and send the instructions to the processor 1401. The instructions, when executed by the processor 1401, may cause the apparatus to perform the various steps performed by the electronic device 200 (e.g. a cell phone) in the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
Embodiments of the present application also provide a chip system, as shown in fig. 11, which includes at least one processor 1501 and at least one interface circuit 1502. The processor 1501 and the interface circuit 1502 may be interconnected by wires. For example, interface circuit 1502 may be used to receive signals from other devices (e.g., memory). For another example, interface circuit 1502 may be used to send signals to other devices (e.g., processor 1501). Illustratively, the interface circuit 1502 may read instructions stored in the memory and send the instructions to the processor 1501. The instructions, when executed by the processor 1501, may cause an apparatus to perform the various steps performed by the headset 100 (e.g., VR glasses) in the above embodiments. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
The embodiment of the application also provides a device which is contained in the electronic equipment and has the function of realizing the behavior of the electronic equipment in any one of the methods of the embodiment. The functions can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes at least one module or unit corresponding to the functions described above. Such as a sensor module or unit, a communication module or unit, and a processing module or unit. The device may be a system-on-chip.
The embodiment of the application also provides a device, which is contained in the head-mounted equipment and has the function of realizing the head-mounted equipment in any one of the methods of the embodiment. The functions can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes at least one module or unit corresponding to the functions described above. Such as a sensor module or unit, a communication module or unit, and a processing module or unit. The device may be a system-on-chip.
Embodiments also provide a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform a method as described in any one of the possible implementations of the embodiments described above.
Embodiments also provide a computer readable storage medium comprising computer instructions which, when run on a head-mounted device, cause the head-mounted device to perform a method as described in any one of the possible implementations of the embodiments above.
The present embodiments also provide a computer program product which, when run on a computer, causes the computer to perform the method as described in any one of the possible implementations of the embodiments described above.
It will be appreciated that the above-described terminal, etc. may comprise hardware structures and/or software modules that perform the respective functions in order to achieve the above-described functions. Those of skill in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present invention.
The embodiment of the present application may divide the functional modules of the terminal and the like according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present invention, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
The functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard disk, read-only memory, random access memory, magnetic or optical disk, and the like.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. An image rendering method, characterized by being applied to a system including an electronic device and a head-mounted device, the method comprising:
the electronic equipment acquires first gesture data of the electronic equipment; the first gesture data are gesture data of passive change of the gesture of the head of the user caused by the change of the gesture of the vehicle;
the head-mounted device acquires second gesture data of the head-mounted device and sends the second gesture data to the electronic device; the second posture data comprise the first posture data and posture data of posture change caused by the active rotation of the head of the user;
the electronic device receives the second gesture data sent by the head-mounted device;
the electronic equipment obtains third gesture data according to the first gesture data and the second gesture data, wherein the third gesture data is gesture data of gesture change caused by the active rotation of the head of the user;
The electronic equipment processes the three-dimensional scene data according to the third gesture data to obtain a first image;
the electronic device sends the first image to the head-mounted device;
the head mounted device displays the first image.
2. The method of claim 1, wherein the first pose data comprises at least one of an euler angle, a quaternion, and a rotation matrix of the electronic device, and the second pose data comprises at least one of an euler angle, a quaternion, and a rotation matrix of the head mounted device.
3. The method according to claim 1 or 2, wherein the electronic device obtains third gesture data according to the first gesture data and the second gesture data, and the method comprises:
if the electronic equipment judges that the first gesture data meets a first condition, the electronic equipment determines that the second gesture data is the third gesture data;
and if the electronic equipment judges that the first posture data does not meet the first condition, the electronic equipment determines to offset the first posture data from the second posture data, and the third posture data is obtained.
4. The method of claim 1 or 2, wherein the electronic device obtains third pose data from the first pose data and the second pose data, further comprising:
the electronic equipment judges whether the second gesture data meets a second condition or not;
if the second gesture data meets the second condition, the electronic device determines that the second gesture data is the third gesture data;
and if the second posture data does not meet the second condition, the electronic equipment determines to offset the first posture data from the second posture data, and obtains the third posture data.
5. The method of claim 3 or 4, wherein the electronic device determining to cancel the first pose data from the second pose data to obtain the third pose data comprises: and the electronic equipment determines the Euler angle of the head-mounted equipment minus the Euler angle of the electronic equipment to obtain the third gesture data.
6. The method according to any one of claims 1-5, wherein the electronic device processing the three-dimensional scene data according to the third pose data to obtain the first image comprises:
the electronic device obtains a second image according to the three-dimensional scene data;
the electronic device rotates the second image according to the rotation matrix in the third pose data to obtain the first image.
7. An image rendering method, applied to an electronic device, the method comprising:
the electronic device acquires first pose data of the electronic device and receives second pose data of the head-mounted device sent by the head-mounted device; the first pose data is pose data of a passive change in the user's head pose caused by a change in the pose of the vehicle; the second pose data comprises the first pose data and pose data of a pose change caused by the user's active head rotation;
the electronic device obtains third pose data according to the first pose data and the second pose data; the third pose data is pose data of the pose change caused by the user's active head rotation;
the electronic device processes three-dimensional scene data according to the third pose data to obtain a first image;
the electronic device sends the first image to the head-mounted device.
8. The method of claim 7, wherein the first pose data comprises at least one of an Euler angle, a quaternion, and a rotation matrix of the electronic device, and the second pose data comprises at least one of an Euler angle, a quaternion, and a rotation matrix of the head-mounted device.
9. The method of claim 7 or 8, wherein the electronic device obtaining third pose data according to the first pose data and the second pose data comprises:
if the electronic device determines that the first pose data satisfies a first condition, the electronic device takes the second pose data as the third pose data;
if the electronic device determines that the first pose data does not satisfy the first condition, the electronic device cancels the first pose data from the second pose data to obtain the third pose data.
10. The method of claim 7 or 8, wherein the electronic device obtaining third pose data according to the first pose data and the second pose data further comprises:
the electronic device determines whether the second pose data satisfies a second condition;
if the second pose data satisfies the second condition, the electronic device takes the second pose data as the third pose data;
if the second pose data does not satisfy the second condition, the electronic device cancels the first pose data from the second pose data to obtain the third pose data.
11. The method of claim 9 or 10, wherein the electronic device cancelling the first pose data from the second pose data to obtain the third pose data comprises: the electronic device subtracts the Euler angle of the electronic device from the Euler angle of the head-mounted device to obtain the third pose data.
12. The method according to any one of claims 7-11, wherein the electronic device processing the three-dimensional scene data according to the third pose data to obtain the first image comprises:
the electronic device obtains a second image according to the three-dimensional scene data;
the electronic device rotates the second image according to the rotation matrix in the third pose data to obtain the first image.
13. An electronic device, comprising: a processor, a memory, and a communication interface, the memory and the communication interface being coupled to the processor, the memory being configured to store computer program code, the computer program code comprising computer instructions that, when read from the memory by the processor, cause the electronic device to:
acquire first pose data of the electronic device and receive second pose data of the head-mounted device sent by the head-mounted device; the first pose data is pose data of a passive change in the user's head pose caused by a change in the pose of the vehicle; the second pose data comprises the first pose data and pose data of a pose change caused by the user's active head rotation;
obtain third pose data according to the first pose data and the second pose data; the third pose data is pose data of the pose change caused by the user's active head rotation;
process three-dimensional scene data according to the third pose data to obtain at least one first image;
send the first image to the head-mounted device.
14. The electronic device of claim 13, wherein the first pose data comprises at least one of an Euler angle, a quaternion, and a rotation matrix of the electronic device, and the second pose data comprises at least one of an Euler angle, a quaternion, and a rotation matrix of the head-mounted device.
15. The electronic device of claim 13 or 14, wherein the obtaining third pose data according to the first pose data and the second pose data comprises:
if the first pose data is determined to satisfy a first condition, taking the second pose data as the third pose data;
if the first pose data does not satisfy the first condition, cancelling the first pose data from the second pose data to obtain the third pose data.
16. The electronic device of claim 13 or 14, wherein the obtaining third pose data according to the first pose data and the second pose data further comprises:
determining whether the second pose data satisfies a second condition;
if the second pose data satisfies the second condition, taking the second pose data as the third pose data;
if the second pose data does not satisfy the second condition, cancelling the first pose data from the second pose data to obtain the third pose data.
17. The electronic device of claim 15 or 16, wherein the cancelling the first pose data from the second pose data to obtain the third pose data comprises: subtracting the Euler angle of the electronic device from the Euler angle of the head-mounted device to obtain the third pose data.
18. The electronic device of any one of claims 13-17, wherein the processing the three-dimensional scene data according to the third pose data to obtain the first image comprises:
obtaining a second image according to the three-dimensional scene data;
rotating the second image according to the rotation matrix in the third pose data to obtain the first image.
19. A computer-readable storage medium comprising computer instructions that, when run on an electronic device, cause the electronic device to perform the image rendering method of any one of claims 7-12.
20. A system on a chip comprising one or more processors that, when executing instructions, perform the image rendering method of any one of claims 7-12.
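
Note on claim 1 (illustrative only): the relationship between the three pose data sets can be pictured as the head-mounted device pose (second pose data) being the composition of the vehicle-induced pose (first pose data) and the user's active head rotation (third pose data), so the third pose data is recovered by removing the first from the second. The sketch below expresses this with rotation objects; the function name, the use of SciPy, and the sample angles are assumptions for illustration and are not taken from the patent.

from scipy.spatial.transform import Rotation as R

def recover_active_head_pose(first_pose: R, second_pose: R) -> R:
    # Cancel the vehicle-induced pose from the total head pose (illustrative sketch only).
    return first_pose.inv() * second_pose

# Hypothetical yaw/pitch/roll samples in degrees.
vehicle_pose = R.from_euler("zyx", [10.0, 0.0, 0.0], degrees=True)   # first pose data
head_pose = R.from_euler("zyx", [25.0, 5.0, 0.0], degrees=True)      # second pose data
active_rotation = recover_active_head_pose(vehicle_pose, head_pose)  # third pose data
print(active_rotation.as_euler("zyx", degrees=True))                 # approximately [15, 5, 0]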
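
Note on claims 3-5 (and the parallel claims 9-11 and 15-17), illustrative only: when the pose data are Euler angles, the cancellation reduces to an element-wise subtraction, gated by the first condition. The claims do not define the first condition; the threshold value, function name, and angle convention below are assumptions made for illustration.

import numpy as np

STATIC_THRESHOLD_DEG = 0.5  # hypothetical first condition: the vehicle pose change is negligible

def third_pose_from(first_euler_deg, second_euler_deg):
    # Sketch of claims 3 and 5: keep the head pose unchanged when the vehicle pose
    # change is negligible, otherwise subtract the vehicle Euler angles from it.
    first_euler_deg = np.asarray(first_euler_deg, dtype=float)
    second_euler_deg = np.asarray(second_euler_deg, dtype=float)
    if np.all(np.abs(first_euler_deg) < STATIC_THRESHOLD_DEG):
        return second_euler_deg                    # second pose data is taken as the third pose data
    return second_euler_deg - first_euler_deg      # cancel the vehicle-induced pose

print(third_pose_from([10.0, 0.0, 0.0], [25.0, 5.0, 0.0]))  # [15.  5.  0.]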
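
Note on claim 6 (and the parallel claims 12 and 18), illustrative only: rotating the already rendered second image by the rotation matrix of the third pose data can be pictured as a reprojection under a pure camera rotation, for which the pixel mapping is the homography H = K·R·K⁻¹. The camera intrinsics, image size, and the use of OpenCV below are assumptions; the patent does not prescribe a particular warping method.

import cv2
import numpy as np

def rotate_rendered_image(second_image, rotation_matrix, K):
    # Sketch of claim 6: warp the rendered second image by a pure rotation
    # to obtain the first image, using the homography H = K * R * inv(K).
    H = K @ rotation_matrix @ np.linalg.inv(K)
    h, w = second_image.shape[:2]
    return cv2.warpPerspective(second_image, H, (w, h))

# Hypothetical values: a 640x480 test image, simple pinhole intrinsics, a 10-degree yaw.
img = np.zeros((480, 640, 3), dtype=np.uint8)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
yaw = np.deg2rad(10.0)
R_yaw = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(yaw), 0.0, np.cos(yaw)]])
first_image = rotate_rendered_image(img, R_yaw, K)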
CN202010066613.0A 2020-01-20 2020-01-20 Image rendering method, electronic equipment and system Active CN113223129B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010066613.0A CN113223129B (en) 2020-01-20 2020-01-20 Image rendering method, electronic equipment and system
PCT/CN2020/127599 WO2021147465A1 (en) 2020-01-20 2020-11-09 Image rendering method, electronic device, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010066613.0A CN113223129B (en) 2020-01-20 2020-01-20 Image rendering method, electronic equipment and system

Publications (2)

Publication Number Publication Date
CN113223129A CN113223129A (en) 2021-08-06
CN113223129B true CN113223129B (en) 2024-03-26

Family

ID=76992824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010066613.0A Active CN113223129B (en) 2020-01-20 2020-01-20 Image rendering method, electronic equipment and system

Country Status (2)

Country Link
CN (1) CN113223129B (en)
WO (1) WO2021147465A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114461072A (en) * 2022-02-10 2022-05-10 湖北星纪时代科技有限公司 Display method, display device, electronic equipment and storage medium
CN117008711A (en) * 2022-04-29 2023-11-07 华为技术有限公司 Method and device for determining head posture
CN115988247B (en) * 2022-12-08 2023-10-20 小象智能(深圳)有限公司 XR vehicle-mounted video watching system and method
CN116204068B (en) * 2023-05-06 2023-08-04 蔚来汽车科技(安徽)有限公司 Augmented reality display device, display method thereof, vehicle, mobile terminal, and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7172030B2 (en) * 2017-12-06 2022-11-16 富士フイルムビジネスイノベーション株式会社 Display device and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106850967A (en) * 2016-12-29 2017-06-13 深圳市宇恒互动科技开发有限公司 A kind of self adaptation screen display method, system and helmet
CN107820593A (en) * 2017-07-28 2018-03-20 深圳市瑞立视多媒体科技有限公司 A kind of virtual reality exchange method, apparatus and system
CN109992111A (en) * 2019-03-25 2019-07-09 联想(北京)有限公司 Augmented reality extended method and electronic equipment

Also Published As

Publication number Publication date
CN113223129A (en) 2021-08-06
WO2021147465A1 (en) 2021-07-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant