CN116631307A - Display method, intelligent wearable device, electronic device, device and storage medium - Google Patents

Display method, intelligent wearable device, electronic device, device and storage medium

Info

Publication number
CN116631307A
CN116631307A (Application CN202310384179.4A)
Authority
CN
China
Prior art keywords
wearable device
intelligent wearable
sensor data
inertial sensor
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310384179.4A
Other languages
Chinese (zh)
Inventor
郝冬宁
向颖
蔡勇亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Xingji Meizu Technology Co ltd
Original Assignee
Hubei Xingji Meizu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Xingji Meizu Technology Co ltd filed Critical Hubei Xingji Meizu Technology Co ltd
Priority to CN202310384179.4A
Publication of CN116631307A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 - Aspects of display data processing
    • G09G2340/04 - Changes in size, position or resolution of an image
    • G09G2340/0464 - Positioning
    • G09G2340/0478 - Horizontal positioning
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2380/00 - Specific applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a display method, an intelligent wearable device, an electronic device, an apparatus and a storage medium, belonging to the field of interaction. The display method provided by the embodiments of the application includes the following steps: acquiring first inertial sensor data of the intelligent wearable device, and acquiring second inertial sensor data of a vehicle whose posture is fused with that of the intelligent wearable device; determining the real posture of the intelligent wearable device according to the original posture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data; and determining a rendering area of the intelligent wearable device based on the real posture, and displaying the content provided by the intelligent wearable device in the rendering area.

Description

Display method, intelligent wearable device, electronic device, device and storage medium
Technical Field
The present application relates to the field of interaction, and in particular, to a display method, an intelligent wearable device, an electronic device, a device, and a storage medium.
Background
When smart glasses such as AR (Augmented Reality) glasses are worn in a driving scene, posture changes of the vehicle, for example those caused by turning left, changing lanes and other maneuvers, force the posture of the AR glasses worn by the user in the vehicle to change synchronously. As a result, all virtual scenes related to 3DOF (three Degrees Of Freedom) in the AR glasses are forced to change, which affects the experience of the user wearing the AR glasses in the driving scene.
Disclosure of Invention
In a first aspect, an embodiment of the present application provides a display method, including:
acquiring first inertial sensor data of the intelligent wearable device, and acquiring second inertial sensor data of a vehicle fused with the posture of the intelligent wearable device;
determining a true posture of the intelligent wearable device according to an original posture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data;
and determining a rendering area of the intelligent wearable device based on the real posture, and displaying the content provided by the intelligent wearable device in the rendering area.
In some embodiments, further comprising:
acquiring third inertial sensor data of the intelligent wearable device and fourth inertial sensor data of the vehicle in an initial state, wherein the initial state indicates that the postures of the intelligent wearable device and the vehicle are not associated with each other;
determining a first original gesture of the intelligent wearable device according to the third inertial sensor data;
determining a second raw pose of the vehicle from the fourth inertial sensor data;
and determining the original posture of the intelligent wearable device relative to the vehicle according to the first original posture and the second original posture.
In some embodiments, the third inertial sensor data includes three-axis acceleration data, three-axis angular velocity data, and three-axis magnetic component data, and the fourth inertial sensor data includes three-axis acceleration data, three-axis angular velocity data, and three-axis magnetic component data.
In some embodiments, further comprising:
fusing the third inertial sensor data by adopting a sensor fusion algorithm to obtain a first original posture of the intelligent wearable device;
and fusing the fourth inertial sensor data by adopting a sensor fusion algorithm to obtain a second original posture of the vehicle.
In some embodiments, further comprising:
fusing the first inertial sensor data by adopting a sensor fusion algorithm to obtain a first real gesture of the intelligent wearable device, wherein the first inertial sensor data of the intelligent wearable device comprises three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data;
fusing the second inertial sensor data by adopting a sensor fusion algorithm to obtain a second real gesture of the vehicle, wherein the second inertial sensor data of the vehicle comprises three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data;
and determining the real gesture of the intelligent wearable device according to the original gesture, the first real gesture and the second real gesture.
In some embodiments, further comprising:
the size of the rendering area is inversely proportional to the magnitude of the change in the real pose compared to the original pose, and the rendering area moves in the opposite direction of the real pose compared to the direction of the change in the original pose.
In a second aspect, an embodiment of the present application further provides a display apparatus, including:
the acquisition unit is used for acquiring first inertial sensor data of the intelligent wearable equipment and acquiring second inertial sensor data of the vehicle fused with the posture of the intelligent wearable equipment;
a first determining unit, configured to determine a true posture of the smart wearable device according to an original posture of the smart wearable device relative to the vehicle, the first inertial sensor data, and the second inertial sensor data;
and the rendering unit is used for determining a rendering area of the intelligent wearable device based on the real gesture, and displaying the content provided by the intelligent wearable device in the rendering area.
In a third aspect, an embodiment of the present application further provides an intelligent wearable device, including a processor, where the processor is configured to perform any one of the display methods described above.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor executes the program to implement a display method according to any one of the above.
In a fifth aspect, embodiments of the present application also provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a display method as described in any of the above.
In a sixth aspect, embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, implements a display method as described in any of the above.
Drawings
In order to more clearly illustrate the application or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a display method according to an embodiment of the application;
FIG. 2 is a schematic diagram of an interaction state between a smart wearable device and a vehicle according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of an interaction state between a smart wearable device and a vehicle according to an embodiment of the present application;
FIG. 4 is a schematic illustration of a vehicle driving scenario according to an embodiment of the present application;
FIG. 5 is a second schematic view of a driving scene of a vehicle according to an embodiment of the present application;
fig. 6 is a schematic view of a scene displayed in a rendering area of a smart wearable device according to an embodiment of the present application;
FIG. 7 is a second flow chart of a display method according to an embodiment of the application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or otherwise described herein, and that the "first" and "second" distinguishing between objects generally are not limited in number to the extent that the first object may, for example, be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means a relationship in which the associated object is an "or" before and after.
It should be noted that when a user wears smart glasses, for example AR (Augmented Reality) glasses, once the glasses are running stably after start-up, the rendering area of the smart glasses is fixed directly in front of the current orientation. While the posture of the user's head faces the rendering area, 100% of the content is displayed in the rendering area. When the posture of the user wearing the smart glasses changes, the rendering area of the smart glasses changes accordingly: for example, if the user's head rotates to the left, part of the right side of the rendering area of the smart glasses is occluded and that part is no longer rendered, and the larger the rotation angle, the larger the occluded part of the rendering area, until the content is completely invisible in the rendering area.
When the smart glasses are worn in a driving scene, posture changes of the vehicle, such as those caused by turning left, changing lanes and other maneuvers, force the posture of the AR glasses worn by the user in the vehicle to change synchronously. As a result, all virtual scenes related to 3DOF (three Degrees Of Freedom) in the AR glasses are forced to change, which seriously degrades the experience of wearing AR glasses in a driving scene.
Although AR glasses are taken as an example here, other types of smart glasses, such as VR (Virtual Reality) glasses, MR (Mixed Reality) glasses and XR (Extended Reality) glasses, are also contemplated by the embodiments of the present application.
At present, additional devices such as cameras are installed in the vehicle to monitor the posture deviation of the user relative to the vehicle. However, this approach requires extra hardware, and the posture calculation error is relatively large.
Therefore, the application provides a display method, an intelligent wearable device, an electronic device, an apparatus and a storage medium. First inertial sensor data of the intelligent wearable device are acquired, together with second inertial sensor data of a vehicle whose posture is fused with that of the intelligent wearable device; the real posture of the intelligent wearable device is then determined according to the original posture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data; finally, a rendering area of the intelligent wearable device is determined based on the real posture, and the content provided by the intelligent wearable device is displayed in the rendering area. In a driving scene, the inertial sensor data of the intelligent wearable device and of the vehicle are fused according to the original posture of the intelligent wearable device relative to the vehicle, so that the posture of the intelligent wearable device is not forced to change by the posture of the vehicle, and drivers and passengers wearing AR glasses in the vehicle can interact freely in 3DoF.
It should be noted that the execution subject of the display method may be an intelligent wearable device (for example, AR glasses), the vehicle, or an edge device communicatively connected with the intelligent wearable device. The edge device may be a terminal, a server, or another device with computing resources.
Fig. 1 is a flow chart of a display method according to an embodiment of the application. As shown in fig. 1, a display method is provided, which is applied to an intelligent wearable device, and includes the following steps: step 110, step 120 and step 130. The method flow steps are only one possible implementation of the application.
Step 110, acquiring first inertial sensor data of the intelligent wearable device and acquiring second inertial sensor data of a vehicle fused with the posture of the intelligent wearable device;
in this embodiment, when a user wears the intelligent wearable device in a driving scene to perform a reality augmentation operation, the interaction state between the intelligent wearable device and the vehicle may be mainly divided into two phases: an initial state and a fusion state.
The initial state means that the postures of the intelligent wearable device and the vehicle are not related to each other. As shown in fig. 2, before the wearer 20 of the intelligent wearable device gets into the vehicle, the posture of the intelligent wearable device is not affected by the posture of the vehicle 10; alternatively, the wearer 20 has got into the vehicle but the vehicle is in an inactive state, and the posture of the intelligent wearable device is likewise not affected by the posture of the vehicle 10.
The fusion state means that the postures of the intelligent wearable device and the vehicle are related to each other; for example, the wearer of the intelligent wearable device is fully seated in the vehicle, and the posture of the intelligent wearable device is fused with the posture of the vehicle while the vehicle is running.
The wearer of the intelligent wearable device may be the driver of the vehicle or a passenger riding in it. The intelligent wearable device may be connected to the vehicle in a wired or wireless manner, for example via Bluetooth or Wi-Fi, to meet the interaction needs of users wearing the device in the vehicle, and one or more intelligent wearable devices may be connected to the vehicle.
Inertial sensors for detecting posture, such as gyroscopes and accelerometers, are disposed on both the intelligent wearable device and the vehicle, so that the posture of the intelligent wearable device and the posture of the vehicle can be calculated from the inertial sensor data.
In one example, the intelligent wearable device and the vehicle are each equipped with three inertial sensors: an accelerometer, a gyroscope and a magnetometer. The accelerometer is a micro-electromechanical element that measures the acceleration of the carrier in the three directions of three-dimensional space (front-back, left-right, up-down); the gyroscope measures the angular velocity of the carrier about the three axes; and the magnetometer measures the magnetic field components along the three axes. That is, the first inertial sensor data of the intelligent wearable device include its three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data, and the second inertial sensor data of the vehicle include the vehicle's three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data.
In one illustration, to ensure accuracy of the attitude estimation, both the first inertial sensor data and the second inertial sensor data are acquired simultaneously.
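As a minimal sketch (not part of the patent text) of how such nine-axis samples and the simultaneity requirement might be represented in code, with field names, units and the skew threshold chosen here as assumptions:

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ImuSample:
    """One nine-axis inertial sample (assumed units: m/s^2, rad/s, uT)."""
    timestamp: float   # seconds, on a clock shared by glasses and vehicle
    accel: Vec3        # three-axis acceleration
    gyro: Vec3         # three-axis angular velocity
    mag: Vec3          # three-axis magnetic components

def paired_samples(glasses: Iterable[ImuSample],
                   vehicle: Iterable[ImuSample],
                   max_skew: float = 0.005) -> Iterator[Tuple[ImuSample, ImuSample]]:
    """Yield (glasses, vehicle) samples close enough in time to be treated
    as acquired simultaneously, as required for accurate posture estimation."""
    for g, v in zip(glasses, vehicle):
        if abs(g.timestamp - v.timestamp) <= max_skew:
            yield g, v
```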
Step 120, determining a real posture of the intelligent wearable device according to an original posture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data.
The original posture of the intelligent wearable device relative to the vehicle is determined after the wearer has entered the vehicle and the intelligent wearable device is running stably after start-up. As shown in fig. 3, the posture of the vehicle is taken as the reference posture (the direction of the solid arrow). If the posture of the intelligent wearable device relative to the vehicle (the direction of the dotted arrow) is beta when the device reaches this stable running state, then in this scene the original posture of the intelligent wearable device relative to the vehicle is beta.
The real posture of the intelligent wearable device refers to the change of the posture of the device worn by the user relative to the original posture in the driving scene.
Step 130, determining a rendering area of the intelligent wearable device based on the real posture, and displaying the content provided by the intelligent wearable device in the rendering area.
The display interface of the smart glasses is fixed relative to the glasses and is normally located directly in front of the intelligent wearable device. The rendering area is the part of the display interface in which content is rendered, determined according to the real posture of the smart glasses.
In one example, when the intelligent wearable device is in the original posture (i.e., the posture in the stable post-startup running state), the whole display interface of the smart glasses is the rendering area. As shown in drawing a of fig. 6, the grid area is the rendering area of the intelligent wearable device after it enters the running state; the rendering area then changes as the real posture of the intelligent wearable device changes.
In this embodiment, displaying content in the rendering area may be implemented by calling functions or components of a rendering library; for example, OpenGL, Vulkan, Unity or other augmented-reality rendering libraries may be used to display the content provided by the intelligent wearable device in its rendering area, and this is not limited here.
In one example, the size of the rendering region is inversely proportional to the magnitude of the change in the real pose as compared to the original pose, and the rendering region moves in the opposite direction of the real pose as compared to the direction of the change in the original pose.
For example, the grid area in drawing a of fig. 6 is the rendering area of the intelligent wearable device in the original posture. After the device enters the running state, if the real posture of the intelligent wearable device rotates 15° to the right compared with the original posture (as shown in drawing b of fig. 6), the rendering area moves partly to the left and part of the left side of the display interface shows no content (the blank area in drawing b); if the real posture rotates 15° to the left compared with the original posture (as shown in drawing c of fig. 6), the rendering area moves partly to the right and part of the right side of the display interface shows no content (the blank area in drawing c).
It should be understood that in this embodiment all of the content can be displayed in the rendering area of the display interface only while the user's head is held in the original posture; as the user's head rotates away from the original posture, the rendering area of the display interface shrinks as the rotation angle increases, until the rendering area reaches 0.
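A minimal sketch of this behaviour for a purely horizontal (yaw) change is shown below; the field-of-view value, the display width and the simple linear falloff are assumptions made for illustration, not values taken from the patent.

```python
def rendering_area(yaw_change_deg: float, fov_deg: float = 30.0, width_px: int = 1920):
    """Return (visible_width_px, x_offset_px) for a horizontal posture change.

    The visible width shrinks as the magnitude of the change grows, reaching 0
    at the edge of the field of view, and the area shifts opposite to the
    direction of the change: turning right moves the rendered content left.
    """
    fraction = max(0.0, 1.0 - abs(yaw_change_deg) / fov_deg)
    visible = int(width_px * fraction)
    hidden = width_px - visible
    offset = -hidden if yaw_change_deg > 0 else hidden   # right turn -> shift left
    return visible, offset

# A 15 degree right turn hides half of an assumed 30 degree field of view:
print(rendering_area(15.0))   # (960, -960)
```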
In this embodiment, when the postures of the intelligent wearable device and the vehicle are fused, the original posture of the intelligent wearable device relative to the vehicle is determined before they are fused (i.e., in the initial state). The real posture of the intelligent wearable device is then determined from this original posture together with the first inertial sensor data and the second inertial sensor data acquired after the postures are fused. The rendering area is thus displayed by combining the posture data of the vehicle and the intelligent wearable device from before and after the posture fusion, which prevents the intelligent wearable device from misjudging its own posture because of the posture of the vehicle.
In some embodiments:
fusing the first inertial sensor data by adopting a sensor fusion algorithm to obtain a first real gesture of the intelligent wearable device, wherein the first inertial sensor data of the intelligent wearable device comprises three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data;
fusing the second inertial sensor data by adopting a sensor fusion algorithm to obtain a second real gesture of the vehicle, wherein the second inertial sensor data of the vehicle comprises three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data;
and determining the real gesture of the intelligent wearable device according to the original gesture, the first real gesture and the second real gesture.
For example, the real posture E_real of the intelligent wearable device is obtained by the following formula:

E_real = E1 - E2 - ΔE

where E1 is the first real posture, E2 is the second real posture, and ΔE is the original posture.
A posture can be represented by Euler angles E: (X, Y, Z), where X is the pitch angle about the X axis, Y is the heading angle about the Y axis and Z is the roll angle about the Z axis; it can also be represented in other ways, such as an axis-angle or a quaternion, without limitation.
As shown in fig. 4, the driving direction of the vehicle is straight ahead in the ground coordinate system, and the original posture of the intelligent wearable device relative to the vehicle is beta. If the posture of the intelligent wearable device is forced to change to the right by alpha together with the vehicle, the first real posture of the intelligent wearable device is (beta + alpha) and the second real posture of the vehicle is alpha; the above formula then gives a real posture of 0 for the intelligent wearable device, i.e., the intelligent wearable device has not actually changed its posture.
As shown in fig. 5, the driving direction of the vehicle is straight ahead in the ground coordinate system, and the original posture of the intelligent wearable device relative to the vehicle is beta. If, while the vehicle is driving, the posture of the intelligent wearable device changes to the right by r, the first real posture of the intelligent wearable device is (beta + r) and the second real posture of the vehicle is 0; the above formula then gives a real posture of r for the intelligent wearable device, i.e., the posture of the intelligent wearable device has changed to the right by r.
In one example, a sensor fusion algorithm is used to fuse the three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data of the intelligent wearable device to obtain the first real posture E1: (X1, Y1, Z1) of the intelligent wearable device, and a sensor fusion algorithm is used to fuse the three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data of the vehicle to obtain the second real posture E2: (X2, Y2, Z2) of the vehicle.
The sensor fusion algorithm includes a complementary filtering algorithm, a Madgwick algorithm, and the like, which are not limited.
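The per-axis subtraction E_real = E1 - E2 - ΔE and the two scenarios of fig. 4 and fig. 5 can be checked with the short sketch below; the concrete angle values and the yaw-only treatment are assumptions made for illustration.

```python
def real_pose(e1, e2, delta_e):
    """Per-axis Euler-angle subtraction: E_real = E1 - E2 - dE."""
    return tuple(a - b - d for a, b, d in zip(e1, e2, delta_e))

beta, alpha, r = 20.0, 10.0, 5.0   # arbitrary yaw angles in degrees for the check

# fig. 4: the vehicle turns right by alpha and drags the glasses with it
# E1 = (0, beta + alpha, 0), E2 = (0, alpha, 0), dE = (0, beta, 0) -> real yaw 0
print(real_pose((0, beta + alpha, 0), (0, alpha, 0), (0, beta, 0)))   # (0, 0.0, 0)

# fig. 5: the vehicle drives straight while the wearer turns right by r -> real yaw r
print(real_pose((0, beta + r, 0), (0, 0, 0), (0, beta, 0)))           # (0, 5.0, 0)
```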
In the embodiment of the application, the first inertial sensor data of the intelligent wearable device are acquired first, together with the second inertial sensor data of the vehicle whose posture is fused with that of the intelligent wearable device; the real posture of the intelligent wearable device is then determined according to the original posture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data. In a driving scene, the inertial sensor data of the intelligent wearable device and of the vehicle are fused according to the interaction state between them, so that the posture of the intelligent wearable device is not forced to change by the posture of the vehicle, and drivers and passengers wearing AR glasses in the vehicle can interact freely in 3DoF.
It should be noted that the embodiments of the present application may be freely combined, reordered, or executed separately, and do not need to rely on a fixed execution sequence.
In some embodiments, further comprising:
acquiring third inertial sensor data of the intelligent wearable device and fourth inertial sensor data of the vehicle in an initial state, wherein the initial state indicates that the postures of the intelligent wearable device and the vehicle are not associated with each other;
determining a first original gesture of the intelligent wearable device according to the third inertial sensor data;
determining a second original attitude of the vehicle according to the fourth inertial sensor data;
and determining the original posture of the intelligent wearable device relative to the vehicle according to the first original posture and the second original posture.
It can be appreciated that in the initial state the postures of the intelligent wearable device and the vehicle may change continuously, so the first original posture and the second original posture also change in real time. In this embodiment, to improve the accuracy of the original posture, the first original posture and the second original posture measured once the intelligent wearable device is running stably after start-up can be used, and the original posture of the intelligent wearable device relative to the vehicle is determined from the first and second original postures measured at that time.
In this embodiment, Euler angles E: (X, Y, Z) are used to represent a posture, and the original posture ΔE of the intelligent wearable device relative to the vehicle is determined according to the following formula:

ΔE = E1* - E2*

where E1* is the first original posture and E2* is the second original posture.
In this embodiment, when the interaction state between the intelligent wearable device and the vehicle is the initial state, the first original posture of the intelligent wearable device is determined according to the third inertial sensor data, and the second original posture of the vehicle is determined according to the fourth inertial sensor data, so as to prepare for the subsequent fusion of the posture of the intelligent wearable device with the posture of the vehicle.
In one example, the third inertial sensor data of the smart wearable device includes three-axis acceleration data, three-axis angular velocity data, and three-axis magnetic component data of the smart wearable device, and the fourth inertial sensor data of the vehicle includes three-axis acceleration data, three-axis angular velocity data, and three-axis magnetic component data of the vehicle.
After the interaction state between the intelligent wearable device and the vehicle enters the initial state, the first original posture E1*: (X1*, Y1*, Z1*) of the intelligent wearable device in the initial state is estimated from the currently measured three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data of the intelligent wearable device, and the second original posture E2*: (X2*, Y2*, Z2*) of the vehicle in the initial state is estimated from the currently measured three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data of the vehicle. This gives the original posture ΔE of the intelligent wearable device relative to the vehicle:

ΔE = E1* - E2* = (X1* - X2*, Y1* - Y2*, Z1* - Z2*)

where E1* is the first original posture and E2* is the second original posture.
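Component-wise, this calibration step is just a per-axis difference; a tiny sketch (with made-up angle values) is:

```python
def original_pose_difference(e1_star, e2_star):
    """dE = E1* - E2*, computed per Euler-angle component (X, Y, Z)."""
    return tuple(a - b for a, b in zip(e1_star, e2_star))

# e.g. glasses at (2, 95, 0.5) and vehicle at (0, 90, 0) in the shared frame:
print(original_pose_difference((2.0, 95.0, 0.5), (0.0, 90.0, 0.0)))   # (2.0, 5.0, 0.5)
```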
In some embodiments, a sensor fusion algorithm may be used to fuse the third inertial sensor data to obtain the first original posture of the intelligent wearable device, and a sensor fusion algorithm may be used to fuse the fourth inertial sensor data to obtain the second original posture of the vehicle.
The sensor fusion algorithm includes a complementary filtering algorithm, a Madgwick algorithm, and the like, which are not limited.
As an example, taking the intelligent wearable device: first, posture 1 is calculated from the three-axis angular velocity data of the intelligent wearable device; then, posture 2 is calculated from its three-axis acceleration data and three-axis magnetic component data; finally, posture 1 and posture 2 are combined by weighting, so that the drift error of the gyroscope is corrected by the accelerometer and the magnetometer and the first original posture of the intelligent wearable device is estimated accurately.
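A bare-bones complementary filter along these lines is sketched below; the gain, the sample period, the (pitch, heading, roll) ordering and the simplified treatment of Euler-angle rates are all assumptions made for the sketch rather than details from the patent.

```python
import math

def accel_mag_pose(accel, mag):
    """Posture 2: pitch and roll from gravity, heading from the magnetometer
    (simplified tilt compensation, assuming a calibrated magnetometer)."""
    ax, ay, az = accel
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    mx, my, mz = mag
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    heading = math.atan2(-yh, xh)
    return (pitch, heading, roll)

def complementary_update(prev_pose, gyro, accel, mag, dt=0.01, k=0.98):
    """Weighted combination of the gyro-integrated posture (posture 1) and the
    accelerometer/magnetometer posture (posture 2); k close to 1 trusts the
    gyroscope short-term while the second term slowly corrects its drift."""
    pose1 = tuple(p + w * dt for p, w in zip(prev_pose, gyro))   # posture 1
    pose2 = accel_mag_pose(accel, mag)                           # posture 2
    return tuple(k * a + (1.0 - k) * b for a, b in zip(pose1, pose2))
```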
Referring to fig. 7, fig. 7 is a complete flow chart of a display method according to an embodiment of the application, including the following steps:
step 210a, acquiring first inertial sensor data of the AR glasses;
step 210b, acquiring second inertial sensor data of the vehicle;
step 220, determining the interaction state between the AR glasses and the vehicle as an initial state;
step 230, determining a first original posture of the AR glasses according to the first inertial sensor data; determining a second original attitude of the vehicle according to the second inertial sensor data;
step 240, determining the interaction state between the AR glasses and the vehicle as a fusion state;
step 250, determining an original posture difference between the AR glasses and the vehicle according to the first original posture and the second original posture;
step 260, determining the interaction state between the AR glasses and the vehicle as a use state;
step 270, acquiring a first real posture of the AR glasses and a second real posture of the vehicle; determining the real gesture of the AR glasses according to the first real gesture, the second real gesture and the original gesture difference;
step 280, displaying a target picture according to the real posture of the AR glasses.
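Putting the stages of fig. 7 together, the overall loop could be organised roughly as follows; the helper callables (read_glasses_imu, read_vehicle_imu, fuse_pose, render) are hypothetical placeholders rather than interfaces defined by the patent.

```python
def run_display_loop(read_glasses_imu, read_vehicle_imu, fuse_pose, render):
    """Sketch of the fig. 7 flow: calibrate in the initial state, then keep
    subtracting the vehicle's motion while the postures are fused."""
    # Initial state (steps 210-230): the two postures are not yet coupled.
    e1_star = fuse_pose(read_glasses_imu())    # first original posture E1*
    e2_star = fuse_pose(read_vehicle_imu())    # second original posture E2*

    # Fusion state (steps 240-250): fix the original posture difference dE.
    delta = tuple(a - b for a, b in zip(e1_star, e2_star))

    # Use state (steps 260-280): E_real = E1 - E2 - dE on every update.
    while True:
        e1 = fuse_pose(read_glasses_imu())     # first real posture E1
        e2 = fuse_pose(read_vehicle_imu())     # second real posture E2
        real = tuple(a - b - d for a, b, d in zip(e1, e2, delta))
        render(real)                           # place the rendering area from E_real
```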
The embodiment of the application also provides an intelligent wearable device, which comprises a processor, wherein the processor is used for executing a display method, and the method comprises the following steps:
acquiring first inertial sensor data of the intelligent wearable device, and acquiring second inertial sensor data of a vehicle fused with the posture of the intelligent wearable device;
determining a true posture of the intelligent wearable device according to an original posture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data;
and determining a rendering area of the intelligent wearable device based on the real gesture, and displaying the content provided by the intelligent wearable device in the rendering area.
Further, the processor is further configured to perform:
acquiring third inertial sensor data of the intelligent wearable device and fourth inertial sensor data of the vehicle in an initial state, wherein the initial state indicates that the postures of the intelligent wearable device and the vehicle are not associated with each other;
determining a first original posture of the intelligent wearable device according to the third inertial sensor data, and determining a second original posture of the vehicle according to the fourth inertial sensor data;
and determining the original posture of the intelligent wearable device relative to the vehicle according to the first original posture and the second original posture.
Further, the third inertial sensor data of the intelligent wearable device includes three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data, and the fourth inertial sensor data includes three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data.
Further, the processor is further configured to perform:
fusing the third inertial sensor data by adopting a sensor fusion algorithm to obtain a first original posture of the intelligent wearable device;
and fusing the fourth inertial sensor data by adopting a sensor fusion algorithm to obtain a second original posture of the vehicle.
Further, the processor is further configured to perform:
fusing the first inertial sensor data by adopting a sensor fusion algorithm to obtain a first real gesture of the intelligent wearable device, wherein the first inertial sensor data of the intelligent wearable device comprises three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data;
fusing the second inertial sensor data by adopting a sensor fusion algorithm to obtain a second real gesture of the vehicle, wherein the second inertial sensor data of the vehicle comprises three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data;
and determining the real gesture of the intelligent wearable device according to the original gesture, the first real gesture and the second real gesture.
Further, the processor is further configured to perform:
the size of the rendering area is inversely proportional to the magnitude of the change in the real pose compared to the original pose, and the rendering area moves in the opposite direction of the real pose compared to the direction of the change in the original pose.
The embodiment of the application also provides a display device, which comprises:
the acquisition unit is used for acquiring first inertial sensor data of the intelligent wearable equipment and acquiring second inertial sensor data of the vehicle fused with the posture of the intelligent wearable equipment;
the determining unit is used for determining the real gesture of the intelligent wearable device according to the original gesture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data;
and the rendering unit is used for determining a rendering area of the intelligent wearable device based on the real gesture, and displaying the content provided by the intelligent wearable device in the rendering area.
In the embodiment of the application, when the postures of the intelligent wearable device and the vehicle are fused, the original posture of the intelligent wearable device relative to the vehicle is determined before they are fused (i.e., in the initial state), and the real posture of the intelligent wearable device is then determined from this original posture together with the first inertial sensor data and the second inertial sensor data acquired after the postures are fused. The rendering area is thus displayed by combining the posture data of the vehicle and the intelligent wearable device from before and after the posture fusion, which prevents the intelligent wearable device from misjudging its own posture because of the posture of the vehicle.
In some embodiments, the acquiring unit is further configured to acquire third inertial sensor data of the smart wearable device and fourth inertial sensor data of the vehicle at an initial state, where the initial state characterizes that the gestures of the smart wearable device and the vehicle are not associated with each other; determining a first original pose of the intelligent wearable device according to the third inertial sensor data; determining a second raw pose of the vehicle from the fourth inertial sensor data; and determining the original posture of the intelligent wearable device relative to the vehicle according to the first original posture and the second original posture.
In some embodiments, the third inertial sensor data includes three-axis acceleration data, three-axis angular velocity data, and three-axis magnetic component data of the smart wearable device, and the fourth inertial sensor data includes three-axis acceleration data, three-axis angular velocity data, and three-axis magnetic component data.
In some embodiments, the acquiring unit is further configured to fuse the third inertial sensor data by using a sensor fusion algorithm to obtain a first original posture of the smart wearable device, and to fuse the fourth inertial sensor data by using a sensor fusion algorithm to obtain a second original posture of the vehicle.
In some embodiments, the determining unit is further configured to fuse the first inertial sensor data by using a sensor fusion algorithm to obtain a first real pose of the intelligent wearable device, where the first inertial sensor data of the intelligent wearable device includes three-axis acceleration data, three-axis angular velocity data, and three-axis magnetic component data; fusing the second inertial sensor data by adopting a sensor fusion algorithm to obtain a second real gesture of the vehicle, wherein the second inertial sensor data of the vehicle comprises three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data; and determining the real gesture of the intelligent wearable device according to the original gesture, the first real gesture and the second real gesture.
In some embodiments, the size of the rendering region is inversely proportional to the magnitude of the change in the real pose as compared to the original pose, the rendering region moving in the opposite direction of the real pose as compared to the direction of the change in the original pose.
Fig. 8 illustrates a physical structure diagram of an electronic device, as shown in fig. 8, which may include: processor 810, communication interface (Communications Interface) 820, memory 830, and communication bus 840, wherein processor 810, communication interface 820, and memory 830 communicate with each other via communication bus 840. The processor 810 may invoke logic instructions in the memory 830 to perform a display method comprising:
acquiring first inertial sensor data of the intelligent wearable device, and acquiring second inertial sensor data of a vehicle fused with the posture of the intelligent wearable device;
determining a true posture of the intelligent wearable device according to an original posture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data;
and determining a rendering area of the intelligent wearable device based on the real gesture, and displaying the content provided by the intelligent wearable device in the rendering area.
Further, the logic instructions in the memory 830 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, the present application also provides a computer program product, where the computer program product includes a computer program, where the computer program can be stored on a non-transitory computer readable storage medium, where the computer program, when executed by a processor, can perform the display method provided by the above method embodiments, and the method includes:
acquiring first inertial sensor data of the intelligent wearable device, and acquiring second inertial sensor data of a vehicle fused with the posture of the intelligent wearable device;
determining a true posture of the intelligent wearable device according to an original posture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data;
and determining a rendering area of the intelligent wearable device based on the real gesture, and displaying the content provided by the intelligent wearable device in the rendering area.
In yet another aspect, the present application also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the display method provided by the above-described method embodiments, the method comprising:
acquiring first inertial sensor data of the intelligent wearable device, and acquiring second inertial sensor data of a vehicle fused with the posture of the intelligent wearable device;
determining a true posture of the intelligent wearable device according to an original posture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data;
and determining a rendering area of the intelligent wearable device based on the real gesture, and displaying the content provided by the intelligent wearable device in the rendering area.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without undue effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A display method, characterized by being applied to an intelligent wearable device, comprising:
acquiring first inertial sensor data of the intelligent wearable device, and acquiring second inertial sensor data of a vehicle fused with the posture of the intelligent wearable device;
determining a true posture of the intelligent wearable device according to an original posture of the intelligent wearable device relative to the vehicle, the first inertial sensor data and the second inertial sensor data;
and determining a rendering area of the intelligent wearable device based on the real gesture, and displaying the content provided by the intelligent wearable device in the rendering area.
2. The display method according to claim 1, characterized by further comprising:
acquiring third inertial sensor data of the intelligent wearable device and fourth inertial sensor data of the vehicle in an initial state, wherein the initial state indicates that the postures of the intelligent wearable device and the vehicle are not associated with each other;
determining a first original posture of the intelligent wearable device according to the third inertial sensor data, and determining a second original posture of the vehicle according to the fourth inertial sensor data;
and determining the original posture of the intelligent wearable device relative to the vehicle according to the first original posture and the second original posture.
3. The display method according to claim 2, wherein the third inertial sensor data includes three-axis acceleration data, three-axis angular velocity data, and three-axis magnetic component data, and the fourth inertial sensor data includes three-axis acceleration data, three-axis angular velocity data, and three-axis magnetic component data.
4. A display method according to claim 3, further comprising:
fusing the third inertial sensor data by adopting a sensor fusion algorithm to obtain a first original posture of the intelligent wearable device;
and fusing the fourth inertial sensor data by adopting a sensor fusion algorithm to obtain a second original posture of the vehicle.
5. The display method according to claim 1, characterized by further comprising:
fusing the first inertial sensor data by adopting a sensor fusion algorithm to obtain a first real gesture of the intelligent wearable device, wherein the first inertial sensor data of the intelligent wearable device comprises three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data;
fusing the second inertial sensor data by adopting a sensor fusion algorithm to obtain a second real gesture of the vehicle, wherein the second inertial sensor data of the vehicle comprises three-axis acceleration data, three-axis angular velocity data and three-axis magnetic component data;
and determining the real gesture of the intelligent wearable device according to the original gesture, the first real gesture and the second real gesture.
6. The display method according to any one of claims 1 to 5, characterized by further comprising:
the size of the rendering area is inversely proportional to the magnitude of the change in the real pose compared to the original pose, and the rendering area moves in the opposite direction of the real pose compared to the direction of the change in the original pose.
7. A display device, comprising:
the acquisition unit is used for acquiring first inertial sensor data of the intelligent wearable equipment and acquiring second inertial sensor data of the vehicle fused with the posture of the intelligent wearable equipment;
a first determining unit, configured to determine a true posture of the smart wearable device according to an original posture of the smart wearable device relative to the vehicle, the first inertial sensor data, and the second inertial sensor data;
and the rendering unit is used for determining a rendering area of the intelligent wearable device based on the real gesture, and displaying the content provided by the intelligent wearable device in the rendering area.
8. A smart wearable device comprising a processor for performing the display method of any of claims 1 to 6.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the display method of any one of claims 1 to 6 when the program is executed by the processor.
10. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the display method according to any one of claims 1 to 6.
CN202310384179.4A 2023-04-10 2023-04-10 Display method, intelligent wearable device, electronic device, device and storage medium Pending CN116631307A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310384179.4A CN116631307A (en) 2023-04-10 2023-04-10 Display method, intelligent wearable device, electronic device, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310384179.4A CN116631307A (en) 2023-04-10 2023-04-10 Display method, intelligent wearable device, electronic device, device and storage medium

Publications (1)

Publication Number Publication Date
CN116631307A true CN116631307A (en) 2023-08-22

Family

ID=87635472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310384179.4A Pending CN116631307A (en) 2023-04-10 2023-04-10 Display method, intelligent wearable device, electronic device, device and storage medium

Country Status (1)

Country Link
CN (1) CN116631307A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117294832A (en) * 2023-11-22 2023-12-26 湖北星纪魅族集团有限公司 Data processing method, device, electronic equipment and computer readable storage medium
CN117294832B (en) * 2023-11-22 2024-03-26 湖北星纪魅族集团有限公司 Data processing method, device, electronic equipment and computer readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination