CN115469739A - Six-degree-of-freedom picture generation method, device and equipment of controller and storage medium - Google Patents


Info

Publication number
CN115469739A
Authority
CN
China
Prior art keywords
controller
information
freedom
gesture
degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210515348.9A
Other languages
Chinese (zh)
Inventor
丁彬
傅强
李政
帅一帆
范皓宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Co Wheels Technology Co Ltd
Original Assignee
Beijing Co Wheels Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Co Wheels Technology Co Ltd
Priority to CN202210515348.9A
Publication of CN115469739A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a six-degree-of-freedom image generation method and device of a controller, computer equipment and a storage medium, and relates to the technical field of artificial intelligence. The method comprises the following steps: acquiring attitude information of a controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective hand-held gesture associated with the controller; determining current first position information of the controller according to the environment picture containing the designated gesture; and generating a Virtual Reality (VR) picture containing six-degree-of-freedom information of the controller based on the current first position information and the attitude information of the controller. In this way, the position information of the gesture can be converted into the position information of the controller, so that the generated VR picture can contain the position and attitude information of the controller without using a displacement sensor, meeting the user's six-degree-of-freedom visual experience requirement. Because no position sensor needs to be arranged in the controller, the power consumption of the controller can also be reduced.

Description

Six-degree-of-freedom picture generation method, device, equipment and storage medium of controller
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a six-degree-of-freedom image generation method and apparatus for a controller, a computer device, and a storage medium.
Background
Some simple controllers can cooperate with vehicle VR (Virtual Reality) glasses to bring a very rich visual experience to the user. A gyroscope is usually arranged in such a simple controller to measure its attitude, so that the VR glasses can display a three-degree-of-freedom picture of the controller. To meet a user's six-degree-of-freedom requirements, a position sensor can additionally be arranged in the controller to obtain its displacement, so that a picture reflecting the controller's six degrees of freedom can be displayed.
However, a position sensor consumes considerable power and reduces the endurance time of the controller. How to generate a picture representing the six degrees of freedom of a controller without a position sensor is therefore a problem to be solved.
Disclosure of Invention
The present disclosure is directed to solving, at least to some extent, one of the technical problems in the related art.
An embodiment of the first aspect of the present disclosure provides a six-degree-of-freedom picture generation method for a controller, including:
acquiring attitude information of a controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective handheld gesture associated with the controller;
determining current first position information of the controller according to the environment picture containing the designated gesture;
and generating a Virtual Reality (VR) picture containing the six-degree-of-freedom information of the controller based on the current first position information and the attitude information of the controller.
An embodiment of a second aspect of the present disclosure provides a six-degree-of-freedom picture generating apparatus for a controller, including:
a first acquisition module, configured to acquire attitude information of a controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective hand-held gesture associated with the controller;
the determining module is used for determining current first position information of the controller according to the environment picture containing the specified gesture;
and the first generating module is used for generating a Virtual Reality (VR) picture containing the six-degree-of-freedom information of the controller based on the current first position information and the attitude information of the controller.
An embodiment of a third aspect of the present disclosure provides a computer device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the program, the six-degree-of-freedom picture generation method of the controller according to the embodiment of the first aspect of the present disclosure is implemented.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the six-degree-of-freedom picture generation method of the controller as set forth in the first aspect of the present disclosure.
A fifth aspect of the present disclosure provides a computer program product which, when its instructions are executed by a processor, performs the six-degree-of-freedom picture generation method of the controller provided in the first aspect of the present disclosure.
The six-degree-of-freedom image generation method and device of the controller, the computer equipment and the storage medium have the following beneficial effects:
in the embodiment of the disclosure, the car machine, that is, the in-vehicle head unit, first obtains the attitude information of a controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective hand-held gesture associated with the controller; it then determines the current first position information of the controller according to the environment picture containing the designated gesture, and generates a Virtual Reality (VR) picture containing the six-degree-of-freedom information of the controller based on the current first position information and the attitude information. In this way, the position information of the gesture can be converted into the position information of the controller, so that the generated VR picture can contain the position and attitude information of the controller without using a displacement sensor, meeting the user's six-degree-of-freedom visual experience requirement. Because no position sensor needs to be arranged in the controller, the power consumption of the controller can be reduced and long-term use of the controller is guaranteed.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The above and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a six-degree-of-freedom image generation method for a controller according to a first embodiment of the disclosure;
fig. 2 is a schematic flowchart of a six-degree-of-freedom image generation method of a controller according to a second embodiment of the disclosure;
fig. 3 is a schematic flowchart of a six-degree-of-freedom image generation method of a controller according to a third embodiment of the disclosure;
fig. 4 is a block diagram of a six-degree-of-freedom image generation method of a controller according to a fourth embodiment of the present disclosure;
fig. 5 is a block diagram of a six-degree-of-freedom image generation method of a controller according to a fifth embodiment of the present disclosure;
FIG. 6 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to be illustrative of the present disclosure, and should not be construed as limiting the present disclosure.
A six-degree-of-freedom screen generation method, apparatus, computer device, and storage medium of a controller of the embodiments of the present disclosure are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a six-degree-of-freedom image generation method of a controller according to an embodiment of the present disclosure.
It should be noted that the execution subject of the six-degree-of-freedom picture generation method for a controller in this embodiment is a six-degree-of-freedom picture generation device for a controller. The device may be implemented in software and/or hardware, and may be configured in a server at the vehicle end, that is, the car machine. The following describes the six-degree-of-freedom picture generation method proposed in this disclosure with the car machine as the execution subject, which is not limited herein.
As shown in fig. 1, the six-degree-of-freedom picture generation method of the controller may include the steps of:
step 101, acquiring posture information of a controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective hand-holding posture associated with the controller.
The controller may be an electronic interactive device and may include communication modules such as a Bluetooth module and an NFC module for communicating with the car machine, through which it can transmit its battery level information, physical address (MAC) information, form (type) information, model information and the like, which are not limited herein. In addition, an attitude sensor such as a gyroscope can be installed in the controller to collect its attitude information.
It should be noted that the controller in the present disclosure may be a simple controller, such as a ring controller, a watch-shaped controller, a ball-shaped controller, a handle-shaped controller, etc., and is not limited herein.
The attitude information may be measured by an inertial measurement sensor of the controller, such as a gyroscope sensor, and may include data such as the acceleration and angular velocity of the controller, as well as angles such as a pitch angle and a heading angle, which are not limited herein.
The environment picture may be a picture of the environment inside the vehicle. It can be understood that the environment picture can be acquired by a camera, and the camera's shooting angle can differ according to the acquisition device. For example, if the camera is located on the VR device, such as VR glasses, a large wide-angle camera can capture, from the head position, a picture covering most of the area in which the controller can move. If the camera is located at a fixed position in the vehicle, multiple cameras need to be arranged at relevant positions to ensure that the captured environment picture covers, as far as possible, all positions and angles of the controller.
Wherein the designated gesture is a valid hand-held gesture associated with the controller.
It should be noted that each form of controller may correspond to a unique effective hand-held gesture; that is, for any form of controller, the controller can only be used when the user operates it with the gesture corresponding to that form. For example, a ball-shaped controller is held in the hand, so only when the user's gesture is "holding the ball in the palm" can the gesture be determined to be the designated gesture corresponding to that type of controller. If the controller is shaped as a ring, the ring can be worn on the index finger with the thumb placed on the ring; this hand-held posture is then the effective hand-held gesture for the controller, that is, the designated gesture corresponding to the ring form, and the form is not limited herein.
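This one-to-one association between a controller's form and its valid hold can be sketched as a simple lookup; the form names and gesture labels below are illustrative assumptions, not terms from this disclosure:

```python
# Hypothetical mapping from controller form (type) to the one valid
# hand-held gesture that activates it; labels are illustrative only.
DESIGNATED_GESTURES = {
    "ball": "ball_in_palm",    # ball held in the palm of the hand
    "ring": "thumb_on_ring",   # ring on index finger, thumb resting on it
    "handle": "full_grip",     # handle wrapped by the fingers
}

def is_valid_hold(controller_type: str, detected_gesture: str) -> bool:
    """A gesture is effective only if it matches the controller's form."""
    return DESIGNATED_GESTURES.get(controller_type) == detected_gesture
```

Restricting detection to a single expected gesture per form is what lets the car machine or VR device skip all other gesture classes.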
Optionally, when receiving the type of the controller sent by the controller, the car machine may send that type to the VR device, and then receive the gesture information of the controller and the environment picture containing the designated gesture corresponding to that type returned by the VR device.
It should be noted that the controller may communicate with the car machine and notify it of the type, that is, the form information, of the current controller, so that the car machine can forward the type to the VR device and the VR device can monitor the environment picture containing the designated gesture corresponding to that type. Limiting detection to the designated gesture for the current controller type saves computing power and improves real-time responsiveness.
Or the car machine can also receive the posture information and the type sent by the controller, and then send the type to a camera device in the car, so that the camera device collects and returns the environment picture corresponding to the type and containing the designated gesture.
It should be noted that the controller may also directly send the current posture information and type to the car machine, and the car machine may then identify, through the camera in the vehicle, the environment picture corresponding to the type and containing the designated gesture.
And 102, determining the current first position information of the controller according to the environment picture containing the designated gesture.
The first position information may be the coordinate information of the controller in the coordinate system of the camera on the VR device, that is, a coordinate system centered on the camera's viewpoint.
It should be noted that, by processing the environment picture containing the designated gesture with a computer-vision (CV) algorithm, the three-dimensional space coordinates of the designated gesture in a camera coordinate system with the VR device as the coordinate origin may be obtained. Coordinate transformation may then be performed on those coordinates based on the known relative spatial position between the designated gesture and the controller, yielding the coordinate information of the controller in the camera coordinate system where the VR device is currently located, that is, the first position information.
And 103, generating a Virtual Reality (VR) picture containing the six-degree-of-freedom information of the controller based on the current first position information and the attitude information of the controller.
After determining the first position information of the controller in the camera coordinate system, the first position information may be adjusted to generate position coordinate information in the world coordinate system.
For example, the position parameter of the VR device in the world coordinate system may be determined first, and then the first position information may be subjected to coordinate transformation based on the position parameter of the VR device in the world coordinate system and the relative position relationship between the VR device and the controller to obtain the position coordinate of the controller in the world coordinate system.
And then, the car machine can render the acquired image shot by the VR equipment through a system with rendering capability based on the position coordinate and the posture information of the controller in a world coordinate system to generate a rendered virtual reality VR picture, so that the virtual reality VR picture can contain six-degree-of-freedom information of the controller.
It should be noted that the position coordinates of the controller in the world coordinate system comprise its position information along the three Cartesian axes X, Y, Z, and the attitude information comprises the attitude angles Pitch, Yaw and Roll of the controller around those axes, where Pitch is the pitch angle of rotation around the X axis, Yaw is the yaw angle of rotation around the Y axis, and Roll is the roll angle of rotation around the Z axis. The position information along the three axes X, Y, Z and the attitude information Pitch, Yaw and Roll around them are collectively referred to as six-degree-of-freedom information.
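As a minimal sketch, the six-degree-of-freedom information described above can be represented as a record combining the three positional coordinates with the three attitude angles (the field names and units are assumptions, not part of the disclosure):

```python
from dataclasses import dataclass

@dataclass
class SixDofPose:
    # Position of the controller along the X, Y, Z Cartesian axes
    # (world coordinate system), e.g. in metres.
    x: float
    y: float
    z: float
    # Attitude of the controller around those axes, e.g. in degrees:
    # Pitch about X, Yaw about Y, Roll about Z.
    pitch: float
    yaw: float
    roll: float

# One sample pose combining position and attitude into 6DoF information.
pose = SixDofPose(x=0.1, y=1.2, z=0.4, pitch=5.0, yaw=-30.0, roll=0.0)
```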
In the embodiment of the disclosure, the car machine first obtains the attitude information of a controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective hand-held gesture associated with the controller; it then determines the current first position information of the controller according to the environment picture containing the designated gesture, and generates a Virtual Reality (VR) picture containing the six-degree-of-freedom information of the controller based on the current first position information and the attitude information. In this way, the position information of the gesture can be converted into the position information of the controller, so that the generated VR picture can contain the position and attitude information of the controller without using a displacement sensor, meeting the user's six-degree-of-freedom visual experience requirement. Because no position sensor needs to be arranged in the controller, the power consumption of the controller can be reduced and long-term use of the controller is guaranteed.
Fig. 2 is a flowchart illustrating a six-degree-of-freedom picture generation method of a controller according to a second embodiment of the present disclosure. As shown in fig. 2, the six-degree-of-freedom picture generation method of the controller may include the steps of:
step 201, obtaining the posture information of the controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective hand-holding posture associated with the controller.
It should be noted that, for a specific implementation manner of step 201, reference may be made to the foregoing embodiments, and details are not described herein.
Step 202, analyzing the environment picture containing the designated gesture to determine a first coordinate of the designated gesture in the environment picture under a reference coordinate system, wherein the reference coordinate system takes a VR device as an origin of coordinates.
The reference coordinate system may be the camera coordinate system, that is, a coordinate system centered on the camera's viewpoint, established with the VR device as the origin.
It should be noted that the first coordinate may be the three-dimensional coordinate of the designated gesture in the reference coordinate system; by processing the environment picture containing the designated gesture with a computer-vision (CV) algorithm, the three-dimensional space coordinate of the designated gesture in the reference coordinate system with the VR device as the coordinate origin may be acquired.
And 203, performing coordinate transformation on the first coordinate based on the relative position relationship between the preset designated gesture and the controller to determine a second coordinate of the controller in the reference coordinate system.
It should be noted that a certain positional correspondence exists between the designated gesture and the controller: since the controller is fixed at a specific position in the hand when held with the designated gesture, the first coordinate may be transformed based on this relative position relationship, which can be determined in advance; that is, the coordinate of the designated gesture is converted into the coordinate of the controller.
For example, if the first coordinate of the designated gesture is (x1, y1, z1), the second coordinate (f(x1), g(y1), p(z1)) of the controller may be calculated based on preset transformation rules x→f(x), y→g(y), z→p(z), which are not limited herein.
In this way, the position information and displacement information of the designated gesture can be converted into the position information and displacement information of the controller.
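One possible choice of the transformation rules f, g, p is a fixed translation between the gesture's reference point and the controller; the sketch below assumes a ring-shaped controller and purely illustrative offset values:

```python
# For a ring-shaped controller worn on the index finger, the controller sits
# at a roughly fixed offset from the hand's reference point, so f, g, p can
# be plain translations. Offsets (in metres) are illustrative assumptions.
GESTURE_TO_CONTROLLER_OFFSET = (0.02, -0.01, 0.03)

def gesture_to_controller(first_coord):
    """Map the gesture's camera-frame coordinate (x1, y1, z1) to the
    controller's camera-frame coordinate (f(x1), g(y1), p(z1))."""
    x1, y1, z1 = first_coord
    dx, dy, dz = GESTURE_TO_CONTROLLER_OFFSET
    return (x1 + dx, y1 + dy, z1 + dz)
```

Because the offset is constant for a given form, any displacement of the gesture maps one-to-one onto a displacement of the controller.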
Wherein the second coordinate may be a three-dimensional space coordinate of the controller in the reference coordinate system.
In this way, gesture recognition is, in a certain sense, converted into controller recognition, and the position information of the designated gesture is converted into the position information of the controller, which makes it convenient to determine the displacement information of the controller.
Step 204, acquiring the current relative position between the controller and the VR device.
Step 205, determining the second position information of the controller according to the second coordinate determined in step 203 and the relative position.
The second position information may be three-dimensional coordinate information of the controller in a world coordinate system, where the world coordinate system is an absolute coordinate system in an environment inside the vehicle.
Specifically, based on the second coordinate of the controller in the reference coordinate system and the relative position between the controller and the VR device, the controller's position can be converted through model transformation into the second position information, namely coordinates in the world coordinate system. That is, the position of the controller in the camera coordinate system is converted into a space coordinate in the world coordinate system; the second position information is thus the position of the controller in the world coordinate system.
The model transformation may include stretching, rotation, displacement, and the like.
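Such a model transformation from the camera (reference) coordinate system to the world coordinate system can be sketched as a 4x4 homogeneous transform; the sketch below assumes a rotation about the world Y axis plus a displacement only, with illustrative values rather than real head-tracking output:

```python
import math

def make_camera_to_world(yaw_deg, cam_pos):
    """Build a 4x4 homogeneous transform for a camera rotated yaw_deg about
    the world Y axis and located at cam_pos (rotation plus displacement
    only; a full model transformation could also include stretching)."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    tx, ty, tz = cam_pos
    return [
        [  c, 0.0,   s, tx],
        [0.0, 1.0, 0.0, ty],
        [ -s, 0.0,   c, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def to_world(m, p):
    """Apply the transform to a camera-frame point (x, y, z)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(m[r][k] * v[k] for k in range(4)) for r in range(3))

# A camera at (1, 2, 3) in the world frame, facing the same way as the
# world axes: the camera origin maps to the camera's world position.
m = make_camera_to_world(0.0, (1.0, 2.0, 3.0))
```

In a real system the rotation and translation would come from the VR device's tracked pose rather than fixed numbers.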
And step 206, generating a Virtual Reality (VR) picture containing six-degree-of-freedom information of the controller based on the second position information and the posture information.
The second position information comprises the position information of the controller along the three Cartesian axes X, Y, Z, and the attitude information comprises the attitude angles Pitch, Yaw and Roll of the controller around those axes, where Pitch is the pitch angle of rotation around the X axis, Yaw is the yaw angle of rotation around the Y axis, and Roll is the roll angle of rotation around the Z axis. The position information along the three axes X, Y, Z and the attitude information Pitch, Yaw and Roll around them are collectively referred to as six-degree-of-freedom information.
Optionally, the car machine may perform rendering processing on the obtained image captured by the VR device through a system with rendering capability based on the second position information and the posture information, and generate a rendered virtual reality VR picture, so that the virtual reality VR picture may include six-degree-of-freedom information of the controller.
In the embodiment of the disclosure, the car machine first obtains the attitude information of the controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective hand-held gesture associated with the controller. It then analyzes the environment picture to determine a first coordinate of the designated gesture under a reference coordinate system that takes the VR device as the coordinate origin, and performs coordinate transformation on the first coordinate based on the preset relative position relationship between the designated gesture and the controller to determine a second coordinate of the controller under the reference coordinate system. Next, it obtains the current relative position between the controller and the VR device and determines the second position information of the controller accordingly, and finally generates a virtual reality VR picture containing the six-degree-of-freedom information of the controller based on the second position information and the attitude information.
In this way, the position coordinates of the designated gesture under the camera coordinate system can be converted into the position coordinates of the controller, and the controller's coordinates can in turn be converted, according to the current relative position of the controller and the VR device, into the second position information, namely coordinates under the world coordinate system. The virtual reality VR picture therefore contains the position information and posture information of the controller, meeting the user's six-degree-of-freedom visual experience requirement; and because no position sensor needs to be arranged in the controller, the power consumption of the controller can be reduced and long-term use of the controller is guaranteed.
Fig. 3 is a flowchart illustrating a six-degree-of-freedom picture generation method of a controller according to a third embodiment of the present disclosure. As shown in fig. 3, the six-degree-of-freedom picture generation method of the controller may include the steps of:
the execution subject of the third embodiment of the present disclosure is a VR device, and the VR device is described below as the execution subject.
Step 301, receiving the attitude information sent by the controller.
It should be noted that the VR device may receive the Bluetooth broadcast information sent by the controller via its Bluetooth module, and parse it to obtain the posture information it contains.
Wherein, the attitude information is the attitude information of the controller.
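As a sketch of this parsing step, decoding attitude angles from a broadcast payload might look like the following; the 12-byte little-endian float layout is purely an assumption, since the disclosure does not specify a broadcast format:

```python
import struct

def parse_attitude(payload: bytes):
    """Decode pitch, yaw, roll (degrees) from a hypothetical 12-byte
    little-endian float32 layout in the controller's Bluetooth broadcast."""
    pitch, yaw, roll = struct.unpack("<fff", payload[:12])
    return {"pitch": pitch, "yaw": yaw, "roll": roll}

# Example: a controller advertising pitch=10.5, yaw=-20.0, roll=0.25.
packet = struct.pack("<fff", 10.5, -20.0, 0.25)
```

A real controller would embed such a payload inside a standard BLE advertisement structure; only the attitude fields are sketched here.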
Step 302, performing visual detection on the current environment to acquire an environment picture containing a designated gesture, wherein the designated gesture is an effective hand-held gesture associated with the controller.
It should be noted that, after receiving the gesture information sent by the controller, the VR device may also receive the type of the controller sent by the car machine and determine the designated gesture to be detected according to that type.
The controller can communicate with the car machine, which in turn tells the VR device the type, that is, the form information, of the current controller, so that the VR device can monitor the environment picture containing the designated gesture corresponding to that type. Limiting detection to the designated gesture for the current controller type saves the VR device's computing power and improves real-time responsiveness.
The environment picture may be a picture of the environment inside the vehicle. The designated gesture is an effective hand-held gesture associated with the controller: each form of controller may correspond to a unique effective hand-held gesture, that is, for any form of controller, the controller can only be used when the user operates it with the gesture corresponding to that form.
Step 303, sending the attitude information of the controller and the environment picture containing the designated gesture to a car machine, so that the car machine generates a virtual reality VR picture containing six-degree-of-freedom information of the controller according to the attitude information and the environment picture.
Optionally, the VR device may send the posture information of the controller to the car machine in a wired manner, and may send the environment picture containing the designated gesture to the car machine in a wireless manner, for example over a millimeter-wave link. The car machine may then, based on the environment picture containing the designated gesture and the posture information, render the acquired image captured by the VR device through a system with rendering capability and generate a rendered virtual reality VR picture, so that the VR picture contains the six-degree-of-freedom information of the controller.
Furthermore, the VR device can also receive the VR picture containing the six-degree-of-freedom information of the controller sent by the car machine, and display it on its display, thereby presenting a VR picture containing the six-degree-of-freedom information of the controller to the user.
In the embodiment of the disclosure, the VR device first receives attitude information sent by the controller, then performs visual detection on the current environment to obtain an environment picture containing the designated gesture, where the designated gesture is the effective handheld gesture associated with the controller, and finally sends the attitude information of the controller and the environment picture containing the designated gesture to the car machine, so that the car machine generates a virtual reality VR picture containing the six-degree-of-freedom information of the controller according to the attitude information and the environment picture. In this way, the VR device sends both the detected effective handheld gesture and the attitude information reported by the controller to the car machine, which performs the image rendering and generates the virtual reality VR picture containing the six-degree-of-freedom information of the controller. This meets the user's demand for a six-degree-of-freedom visual experience; moreover, because no position sensor needs to be arranged in the controller, the power consumption of the controller can be reduced and long-time use of the controller is guaranteed.
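A minimal sketch of the VR-device-side flow just described — receive attitude information, check the current frame for the designated gesture, and forward both to the car machine — assuming a simple callback standing in for the wired/wireless link; all names are hypothetical, not the patent's implementation:

```python
# Hypothetical sketch of the VR-device-side step described above.
# `send` stands in for the wired/wireless link to the car machine.

def vr_device_step(attitude, frame, designated_gesture, send):
    """Forward the controller's attitude plus the environment frame to the
    car machine only when the frame contains the designated (valid
    hand-held) gesture; otherwise drop the frame to save computation."""
    if designated_gesture in frame.get("gestures", []):
        send({"attitude": attitude, "frame": frame})
        return True
    return False
```

Gating on the designated gesture before transmission mirrors the computing-power saving described above: frames without the valid hand-held gesture are never sent or rendered.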
Fig. 4 is a schematic structural diagram of a six-degree-of-freedom picture generation apparatus of a controller according to a fourth embodiment of the present disclosure.
As shown in fig. 4, the six-degree-of-freedom picture generation apparatus 400 of the controller may include: a first obtaining module 410, a determining module 420, and a first generating module 430.
The first obtaining module is used for obtaining attitude information of the controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective handheld gesture associated with the controller;
the determining module is used for determining the current first position information of the controller according to the environment picture containing the designated gesture;
and the first generating module is used for generating a Virtual Reality (VR) picture containing the six-degree-of-freedom information of the controller based on the current first position information and the attitude information of the controller.
Optionally, the first obtaining module is specifically configured to:
in response to receiving the type of controller sent by the controller, sending the type of controller to a VR device;
and receiving the posture information of the controller returned by the VR equipment and the environment picture containing the specified gesture corresponding to the type of the controller.
Optionally, the determining module is specifically configured to:
analyzing the environment picture containing the designated gesture to determine a first coordinate of the designated gesture in the environment picture under a reference coordinate system, wherein the reference coordinate system takes VR equipment as a coordinate origin;
and performing coordinate transformation on the first coordinate based on a preset relative position relation between the designated gesture and the controller to determine a second coordinate of the controller under the reference coordinate system.
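The two steps of the determining module can be sketched as a simple translation between coordinates: the gesture's first coordinate is expressed in a reference frame with the VR device at the origin, and a preset gesture-to-controller relative position yields the controller's second coordinate in the same frame. The offset value below is an assumed example, not a figure from the disclosure:

```python
# Sketch of the coordinate transformation performed by the determining
# module. The gesture-to-controller offset is an assumed example value.

GESTURE_TO_CONTROLLER_OFFSET = (0.0, -0.03, 0.10)  # metres, assumed

def controller_coordinate(gesture_coord, offset=GESTURE_TO_CONTROLLER_OFFSET):
    """Translate the gesture's first coordinate into the controller's
    second coordinate under the VR-device-origin reference frame."""
    return tuple(g + o for g, o in zip(gesture_coord, offset))
```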
Optionally, the first generating module is specifically configured to:
acquiring current second position information of VR equipment and a first relative position between the controller and the VR equipment at an adjacent previous moment;
determining the current relative position of the controller and the VR equipment according to the first position information and the second position information;
determining position change information between the controller and the VR device according to the current relative position and the first relative position;
and generating a Virtual Reality (VR) picture containing the six-degree-of-freedom information of the controller based on the position change information and the attitude information.
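Under the assumption that positions are plain 3-vectors, the four steps above reduce to two vector differences followed by packing the result with the attitude; all names here are illustrative, not the patent's implementation:

```python
# Sketch of the first generating module's steps: compute the current
# controller-to-VR-device relative position, compare it with the relative
# position at the adjacent previous moment, and combine the resulting
# position change with the attitude (roll/pitch/yaw) into a 6-DoF pose.

def relative_position(controller_pos, vr_pos):
    """Relative position of the controller with respect to the VR device."""
    return tuple(c - v for c, v in zip(controller_pos, vr_pos))

def six_dof(controller_pos, vr_pos, prev_relative, attitude):
    """Return the 6-DoF information: 3-DoF position change plus 3-DoF
    attitude (roll, pitch, yaw) reported by the controller's IMU."""
    current = relative_position(controller_pos, vr_pos)
    delta = tuple(c - p for c, p in zip(current, prev_relative))
    return {"position_change": delta, "attitude": attitude}
```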
Optionally, the first obtaining module is specifically configured to:
receiving attitude information and types sent by a controller;
and sending the type to a camera device in the vehicle so that the camera device collects and returns an environment picture which corresponds to the type and contains the designated gesture.
In the embodiment of the disclosure, the car machine first obtains attitude information of the controller and an environment picture containing the designated gesture, where the designated gesture is the effective handheld gesture associated with the controller; it then determines the current first position information of the controller according to the environment picture, and generates a virtual reality VR picture containing the six-degree-of-freedom information of the controller based on the current first position information and the attitude information. In this way, the position information of the gesture is converted into the position information of the controller, so that the generated virtual reality VR picture can contain both the position and the attitude information of the controller without using a displacement sensor. This meets the user's demand for a six-degree-of-freedom visual experience; moreover, because no position sensor needs to be arranged in the controller, the power consumption of the controller can be reduced and long-time use of the controller is guaranteed.
Fig. 5 is a schematic structural diagram of a six-degree-of-freedom picture generation apparatus of a controller according to a fifth embodiment of the present disclosure.
As shown in fig. 5, the six-degree-of-freedom picture generation apparatus of the controller, applied to a VR device, includes a receiving module 510, a second obtaining module 520, and a second generating module 530.
The receiving module is used for receiving the attitude information sent by the controller;
the second acquisition module is used for carrying out visual detection on the current environment so as to acquire an environment picture containing a specified gesture, wherein the specified gesture is an effective handheld gesture associated with the controller;
and the second generation module is used for sending the attitude information of the controller and the environment picture containing the designated gesture to the vehicle machine so that the vehicle machine generates a Virtual Reality (VR) picture containing the six-degree-of-freedom information of the controller according to the attitude information and the environment picture.
Optionally, the second obtaining module is further configured to:
receiving the type of the controller sent by the vehicle machine;
and determining the designated gesture to be detected according to the type of the controller.
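Determining the designated gesture to be detected from the controller type reported by the car machine amounts to a lookup in a type-to-gesture table; the table entries below are invented examples, not types defined by the disclosure:

```python
# Hypothetical mapping from controller type (form information) to its
# unique effective hand-held gesture; entries are invented examples.

DESIGNATED_GESTURES = {"ring": "pinch", "wand": "grip", "pad": "flat-palm"}

def designated_gesture_for(controller_type):
    """Return the single valid hand-held gesture for this controller type."""
    try:
        return DESIGNATED_GESTURES[controller_type]
    except KeyError:
        raise ValueError(f"unknown controller type: {controller_type!r}")
```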
Optionally, the second generating module is further configured to:
and receiving a VR picture which is sent by the vehicle machine and contains the six-degree-of-freedom information of the controller, and displaying the VR picture on a display.
In the embodiment of the disclosure, the VR device first receives attitude information sent by the controller, then performs visual detection on the current environment to obtain an environment picture containing the designated gesture, where the designated gesture is the effective handheld gesture associated with the controller, and finally sends the attitude information of the controller and the environment picture containing the designated gesture to the car machine, so that the car machine generates a virtual reality VR picture containing the six-degree-of-freedom information of the controller according to the attitude information and the environment picture.
In order to implement the foregoing embodiments, the present disclosure also provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the six-degree-of-freedom picture generation method of the controller as proposed by the foregoing embodiments of the present disclosure.
In order to achieve the above embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the six-degree-of-freedom picture generation method of the controller as proposed by the foregoing embodiments of the present disclosure.
In order to implement the foregoing embodiments, the present disclosure further proposes a computer program product; when the instructions in the computer program product are executed by a processor, the six-degree-of-freedom picture generation method of the controller proposed by the foregoing embodiments of the present disclosure is performed.
FIG. 6 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present disclosure. The computer device 12 shown in fig. 6 is only one example and should not bring any limitations to the functionality or scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable nonvolatile optical disk (e.g., a compact disc read-only memory (CD-ROM), a digital versatile disc read-only memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including but not limited to an operating system, one or more application programs, other program modules, and program data, each of which or some combination of which may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described in this disclosure.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example implementing the methods mentioned in the foregoing embodiments.
In the description of the present specification, reference to the description of "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means at least two, e.g., two, three, etc., unless explicitly and specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. While embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are exemplary and not to be construed as limiting the present disclosure, and that changes, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present disclosure.

Claims (12)

1. A six-degree-of-freedom picture generation method of a controller is characterized by comprising the following steps:
acquiring attitude information of a controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective handheld gesture associated with the controller;
determining current first position information of the controller according to the environment picture containing the designated gesture;
and generating a Virtual Reality (VR) picture containing six-degree-of-freedom information of the controller based on the current first position information of the controller and the attitude information.
2. The method of claim 1, wherein the obtaining of the attitude information of the controller and the environment picture containing the designated gesture comprises:
in response to receiving the type of controller sent by the controller, sending the type of controller to a VR device;
and receiving the posture information of the controller returned by the VR equipment and the environment picture containing the specified gesture corresponding to the type of the controller.
3. The method according to claim 1, wherein the determining the current first position information of the controller according to the environment picture containing the designated gesture comprises:
analyzing the environment picture containing the designated gesture to determine a first coordinate of the designated gesture in the environment picture under a reference coordinate system, wherein the reference coordinate system takes VR equipment as a coordinate origin;
and performing coordinate transformation on the first coordinate based on a preset relative position relation between the designated gesture and the controller to determine a second coordinate of the controller under the reference coordinate system.
4. The method of claim 1, wherein generating a Virtual Reality (VR) screen containing six degrees of freedom information for the controller based on the current first position information of the controller and the pose information comprises:
acquiring the relative position between a current controller and VR equipment;
determining second position information of the controller according to the first position information and the relative position;
and generating a Virtual Reality (VR) picture containing six-degree-of-freedom information of the controller based on the second position information and the attitude information.
5. The method according to any one of claims 1-4, wherein the acquiring of the attitude information of the controller and the environment picture containing the designated gesture comprises:
receiving attitude information and types sent by a controller;
and sending the type to a camera device in the vehicle so that the camera device collects and returns an environment picture which corresponds to the type and contains the designated gesture.
6. A six-degree-of-freedom picture generation method of a controller, executed by a VR device, includes:
receiving attitude information sent by a controller;
performing visual detection on a current environment to acquire an environment picture containing a designated gesture, wherein the designated gesture is an effective handheld gesture associated with the controller;
and sending the attitude information of the controller and the environment picture containing the designated gesture to a vehicle machine, so that the vehicle machine generates a Virtual Reality (VR) picture containing six-degree-of-freedom information of the controller according to the attitude information and the environment picture.
7. The method according to claim 6, wherein before the visually detecting the current environment to obtain the environment picture containing the designated gesture, the method further comprises:
receiving the type of the controller sent by the vehicle machine;
and determining the designated gesture to be detected according to the type of the controller.
8. The method according to claim 6 or 7, wherein after sending the attitude information of the controller and the environment picture containing the designated gesture to the car machine to enable the car machine to generate a Virtual Reality (VR) picture containing six-degree-of-freedom information of the controller according to the attitude information and the environment picture, the method further comprises:
and receiving a VR picture which is sent by the vehicle machine and contains the six-degree-of-freedom information of the controller, and displaying the VR picture on a display.
9. A six-degree-of-freedom screen generating device for a controller, comprising:
the first acquisition module is used for acquiring attitude information of a controller and an environment picture containing a designated gesture, wherein the designated gesture is an effective handheld gesture associated with the controller;
the determining module is used for determining the current first position information of the controller according to the environment picture containing the designated gesture;
and the first generating module is used for generating a Virtual Reality (VR) picture containing the six-degree-of-freedom information of the controller based on the current first position information and the attitude information of the controller.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the six degree of freedom picture generation method of the controller of any one of claims 1 to 8 when executing the program.
11. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the six-degree-of-freedom picture generation method of the controller according to any one of claims 1 to 8.
12. A computer program product comprising a computer program which, when executed by a processor, implements the six degree-of-freedom picture generation method of the controller of any one of claims 1 to 8.
CN202210515348.9A 2022-05-12 2022-05-12 Six-degree-of-freedom picture generation method, device and equipment of controller and storage medium Pending CN115469739A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210515348.9A CN115469739A (en) 2022-05-12 2022-05-12 Six-degree-of-freedom picture generation method, device and equipment of controller and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210515348.9A CN115469739A (en) 2022-05-12 2022-05-12 Six-degree-of-freedom picture generation method, device and equipment of controller and storage medium

Publications (1)

Publication Number Publication Date
CN115469739A true CN115469739A (en) 2022-12-13

Family

ID=84364764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210515348.9A Pending CN115469739A (en) 2022-05-12 2022-05-12 Six-degree-of-freedom picture generation method, device and equipment of controller and storage medium

Country Status (1)

Country Link
CN (1) CN115469739A (en)

Similar Documents

Publication Publication Date Title
CN110458122B (en) Sight line calibration method, display device playing method and sight line calibration system
US11625841B2 (en) Localization and tracking method and platform, head-mounted display system, and computer-readable storage medium
EP3859495B1 (en) Systems and methods for tracking motion and gesture of heads and eyes
US10554886B2 (en) Power management for optical position tracking devices
US7952561B2 (en) Method and apparatus for controlling application using motion of image pickup unit
JP5656514B2 (en) Information processing apparatus and method
EP3644826A1 (en) A wearable eye tracking system with slippage detection and correction
JP2006522397A (en) Multi-view display
US20030184602A1 (en) Information processing method and apparatus
US20230325009A1 (en) Methods, devices, apparatuses, and storage media for mapping mouse models for computer mouses
CN111736689B (en) Virtual reality device, data processing method, and computer-readable storage medium
CN107145706B (en) Evaluation method and device for performance parameters of virtual reality VR equipment fusion algorithm
JP2005147894A (en) Measuring method and measuring instrument
US20190042001A1 (en) Three-Dimensional Object Tracking System
CN115469739A (en) Six-degree-of-freedom picture generation method, device and equipment of controller and storage medium
US10130334B2 (en) Presenting a graphical representation of an ultrasound scanning volume
US20230133168A1 (en) Method for identifying human postures and gestures for interaction purposes and portable hand-held device
CN117202142A (en) Method, device, equipment and storage medium for generating six-degree-of-freedom picture of controller
CN115457073A (en) Method, device and equipment for generating six-degree-of-freedom picture of controller and storage medium
CN115471640A (en) Method, device and equipment for generating six-degree-of-freedom picture of controller and storage medium
CN117726960B (en) Interactive device identification method and device, electronic device and storage medium
US20230260210A1 (en) Computer, method, and computer-readable medium
CN117095045A (en) Positioning method, device and equipment of in-vehicle controller and storage medium
CN117289792A (en) AR interaction method and device and electronic equipment
CN115474177A (en) Communication method and device of vehicle-mounted VR equipment, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination