WO2021190421A1 - Virtual reality-based controller light ball tracking method and virtual reality device - Google Patents

Virtual reality-based controller light ball tracking method and virtual reality device Download PDF

Info

Publication number
WO2021190421A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
information
photosphere
posture information
controller
Prior art date
Application number
PCT/CN2021/081910
Other languages
French (fr)
Chinese (zh)
Inventor
王冉冉
杨宇
刘帅
赵玉峰
周鸣岐
Original Assignee
海信视像科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010230449.2A external-priority patent/CN113516681A/en
Priority claimed from CN202010226710.1A external-priority patent/CN111427452B/en
Priority claimed from CN202010246509.XA external-priority patent/CN113467625A/en
Application filed by 海信视像科技股份有限公司
Publication of WO2021190421A1 publication Critical patent/WO2021190421A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • This application relates to the field of simulation technology, and in particular to a method for tracking a controller light ball and a virtual reality device based on virtual reality.
  • Virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR) devices generally include helmets and controllers. By tracking the controller and manipulating objects in the virtual world through it, users can interact with the surrounding environment by controlling the movement of the controller.
  • The controller may also be called a handle. The controller can emit a ball of light, and the position of the ball of light then needs to be tracked to complete the positioning of the target and carry out the virtual reality operation. How to locate and track the controller is a technical problem that needs to be solved in the industry.
  • the present application provides a controller photosphere tracking method and virtual reality equipment based on virtual reality to solve the problem of positioning error or delay in the existing photosphere tracking technology.
  • the present application provides a method for tracking a controller photosphere based on virtual reality.
  • the method includes: determining, according to the first posture information of the previous location point of the photosphere, the second posture information of the next location point adjacent to the previous location point; determining the second location information of the next location point according to the first location information of the previous location point, the first posture information, and the second posture information; and generating and outputting, according to the second position information, the current display position of the virtual target corresponding to the controller.
  • the present application provides a virtual reality-based controller photosphere tracking device, which includes:
  • the first processing unit is configured to determine the second posture information of the next location point adjacent to the previous location point according to the first posture information of the previous location point of the photosphere;
  • a second processing unit configured to determine second location information of the next location point according to the first location information of the previous location point, the first posture information, and the second posture information;
  • the third processing unit is configured to generate and output the current display position of the virtual target corresponding to the controller according to the second position information.
  • this application provides an electronic device, including:
  • at least one processor;
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the method described in any one of the first aspect.
  • the present application provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the method described in any one of the first aspect.
  • this application provides a virtual reality device, the virtual reality device including:
  • a display screen which is used to display images
  • a processor the processor is configured to:
  • the location of the controller is determined, and then displayed on the display screen;
  • the second posture information of the next location point adjacent to the previous location point is determined according to the first posture information of the previous location point of the photosphere;
  • the second position information of the next position point is determined according to the first position information, the first posture information, and the second posture information of the previous position point; and according to the second position information, the current display position of the virtual target corresponding to the controller is generated and output.
  • In this way, there is no need to perform image recognition every time the light ball is tracked and positioned, which improves the interaction speed between the user and the virtual reality environment and enhances the user experience.
  • FIG. 1 is a schematic flowchart of a method for tracking a photosphere of a controller based on virtual reality according to an embodiment of the application;
  • Figure 1a is a schematic diagram of a controller equipped with a light ball provided by an embodiment of the application
  • FIG. 1b is a schematic diagram of a motion trajectory of a photosphere according to an embodiment of the application
  • FIG. 2 is a schematic flowchart of another method for tracking a photosphere of a controller based on virtual reality according to an embodiment of the application;
  • Figure 2a is a schematic diagram of a human head doing up and down rotation around the neck according to an embodiment of the application;
  • FIG. 2b is a schematic diagram of a human eye doing left and right rotation around the occipital bone of the head according to an embodiment of the application;
  • FIG. 2c is a schematic diagram of a human arm making a rotational movement centered on the elbow according to an embodiment of the application;
  • FIG. 3 is a schematic structural diagram of a virtual reality-based controller photosphere tracking device provided by an embodiment of the application.
  • FIG. 4 is a schematic structural diagram of yet another virtual reality-based controller photosphere tracking device provided by an embodiment of the application.
  • FIG. 5 is a schematic structural diagram of a virtual reality-based controller photosphere tracking device provided by an embodiment of the application.
  • Figure 6 is a schematic diagram of an application scenario provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of a controller provided by an embodiment of the application.
  • FIG. 8 is a schematic flowchart of a tracking method for a controller provided by an embodiment of the application.
  • FIG. 10 is a schematic flowchart of still another method for tracking a controller according to an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of a tracking device for a controller provided by an embodiment of the application.
  • FIG. 13 is a schematic diagram of the hardware structure of the tracking device of the controller provided by an embodiment of the application.
  • FIG. 15 is a scene diagram of virtual reality interaction provided by related technologies
  • FIG. 16 is a control logic diagram of a virtual reality control device provided by an embodiment of the application.
  • FIG. 17 is a schematic diagram of touch control provided by an example of this application.
  • FIG. 18 is a schematic diagram of touch control provided by another example of this application.
  • FIG. 19 is a control logic diagram of a virtual reality control device provided by another embodiment of the application.
  • FIG. 21 is a schematic diagram of the touch main interface in the first control mode according to an embodiment of the application.
  • FIG. 22 is a schematic diagram of the touch principle in the first control mode provided by an embodiment of the application.
  • FIG. 23 is a schematic diagram of the touch principle in the second control mode provided by an embodiment of the application.
  • FIG. 24 is a schematic diagram of the infrared touch structure provided by an embodiment of the application.
  • FIG. 25 is a schematic diagram of the principle of infrared touch provided by an embodiment of the application.
  • FIG. 26 is a schematic diagram of the control logic of the helmet provided by an embodiment of the application.
  • FIG. 28 is a flowchart of a virtual reality interaction method provided by an embodiment of the application.
  • FIG. 29 is a flowchart of a virtual reality interaction method provided by another embodiment of this application.
  • Fig. 31 is a block diagram of a helmet provided by an embodiment of the application.
  • Light sphere: a luminous sphere used to track and locate targets in virtual reality technology.
  • The luminous color can be a high-saturation visible light color or infrared light, and the light sphere is usually mounted on a controller.
  • Posture: the orientation and rotation of an object in three-dimensional space, expressed by a rotation matrix, Euler angles, or a quaternion.
  • Inertial sensor: a sensor mainly used to detect and measure acceleration, tilt, shock, vibration, rotation, and multi-degree-of-freedom (DoF) motion. It is an important component for solving navigation, orientation, and motion carrier control problems. It usually includes a gyroscope, an accelerometer, and a magnetometer, as follows:
  • the gyroscope can measure the angular velocity, and the attitude can be obtained by integrating the angular velocity, but errors will occur during the integration process. As time increases, the errors will accumulate and eventually lead to obvious attitude deviations;
  • the accelerometer can measure the acceleration of the device, which contains gravity information. Therefore, the accelerometer data can be used to correct the attitude deviation related to the direction of gravity, that is, the accelerometer can be used to correct the angle deviation of roll and pitch;
  • the yaw angle (yaw) can be calculated from the magnetometer, and the attitude can be corrected accordingly.
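  • As a rough illustration of how these three sensors can be combined (this is an assumed, generic complementary-filter scheme, not the specific fusion algorithm of this application; the function name, Euler-angle representation, and blending weight k are illustrative), the following sketch integrates the gyroscope angular velocity and corrects roll/pitch drift with the accelerometer and yaw drift with the magnetometer:

```python
import math

# Minimal complementary-filter sketch (assumed fusion scheme, not the patent's algorithm).
# Attitude is kept as Euler angles (roll, pitch, yaw) in radians.
def update_attitude(roll, pitch, yaw, gyro, accel, mag_yaw, dt, k=0.98):
    # 1) Integrate the gyroscope angular velocity (accurate short-term, drifts over time).
    roll_g, pitch_g, yaw_g = roll + gyro[0] * dt, pitch + gyro[1] * dt, yaw + gyro[2] * dt

    # 2) Roll/pitch from the accelerometer (gravity direction), which does not drift.
    ax, ay, az = accel
    roll_a = math.atan2(ay, az)
    pitch_a = math.atan2(-ax, math.sqrt(ay * ay + az * az))

    # 3) Blend: mostly gyro (smooth), a little accelerometer/magnetometer (drift correction).
    roll = k * roll_g + (1.0 - k) * roll_a
    pitch = k * pitch_g + (1.0 - k) * pitch_a
    yaw = k * yaw_g + (1.0 - k) * mag_yaw
    return roll, pitch, yaw
```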
  • the controller can emit visible light; the captured image can be obtained according to the visible light emitted by the controller, and then image processing is performed on the captured image to obtain the position of the photosphere, and then the target can be located.
  • the position of the photosphere is determined completely based on the image processing method.
  • The image processing method is easily interfered with by environmental factors and by factors of the image acquisition unit itself, which may make the determined position of the photosphere inaccurate, resulting in target positioning errors or positioning delays.
  • For example, interference affecting the image acquisition unit may cause it to be unable to accurately capture the position of the photosphere, resulting in inaccurate positioning of the photosphere.
  • For another example, when the background color is red, the image acquisition device needs to distinguish the red light emitted by the photosphere from the red background color, which slows down the positioning of the photosphere and causes stalls and delays.
  • the virtual reality-based controller photosphere tracking method provided in this application aims to solve the above technical problems of related technologies.
  • Fig. 1 is a schematic flowchart of a method for tracking a photosphere of a controller based on virtual reality according to an embodiment of the application. As shown in Fig. 1, the method includes:
  • Step 101 According to the first posture information of the previous location point of the photosphere, determine the second posture information of the next location point adjacent to the previous location point.
  • The execution subject of this embodiment is a terminal device, a server, a controller set on the terminal device, or another device or apparatus capable of executing this embodiment; this embodiment takes the application software set on the terminal device as the execution subject as an example for description.
  • the terminal device here may be a VR device.
  • luminous light balls are usually used to locate and track moving targets in a spatial range.
  • A controller with a luminous light ball held or worn by the user can be used to track the movement or action of the user.
  • The position of the light ball is the position of the user or of the user's body part wearing the light ball, and the motion trajectory of the light ball is the motion trajectory of the user or of the user's body part wearing the light ball.
  • Figure 1a is a schematic diagram of a controller equipped with light balls provided by this embodiment. As shown in Figure 1a, the controller can be equipped with light balls of different colors, and light balls of different colors can represent different users or different body parts of a user.
  • This embodiment takes as an example the case where the user remains in place while part of the user's body (for example, the head, eyes, or arms) wears a controller with a light ball and performs different actions.
  • When it is detected that the light ball changes from the previous position point to the next position point, it means that the body part of the user wearing the light ball has also moved from the previous position point to the next position point.
  • the user can wear a controller with a light ball on his head. When the user's head rotates up and down with the neck as the center, the light ball also rotates in space with the user's head.
  • Detecting the change in the position of the photosphere can thus indirectly detect the position change of the user's head as it rotates. Alternatively, the user can wear a controller with a photosphere near the eyes; when the user's eyes rotate left and right around the occipital bone of the head, the light ball makes the corresponding rotational movement in space with the user's eyes, and detecting the position change of the light ball indirectly detects the position change of the user's eyes during the rotation. Alternatively, the user can wear a controller with a light ball on the arm; when the arm rotates around the elbow, the light ball rotates with the arm in space, and detecting the position change of the light ball indirectly detects the position change of the arm during its rotational movement.
  • both the position information and the posture information of the light ball will change.
  • In this embodiment, the first posture information of the light ball at the previous position point is used to predict the second posture information of the light ball at the next position point, and there is no need to use image recognition technology to identify the position information of the light ball at the latter position point.
  • the "previous position point” and “next position point” mentioned in this embodiment are two adjacent position points, which can be taken from any two adjacent position points on the trajectory of the photosphere, and are not limited to
  • the start position point and the end position point of the trajectory of the light ball may be any two position points on the trajectory of the light ball at every predetermined interval of time dt.
  • the preset time dt can be set according to the requirements of the tracking accuracy of the photosphere position, for example, it can be 10ms, 20ms, and so on.
  • Fig. 1b is a schematic diagram of the trajectory of a photosphere provided by this embodiment.
  • As shown in Fig. 1b, point A and point B are two adjacent points on the trajectory of the photosphere, point A being the previous position point and point B the latter position point. If the posture information of the light ball at point A is Q0, the posture information Qt of the light ball at point B can be calculated according to equations I and II, where ω is the rotational angular velocity and dt is the preset time interval.
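  • Equations I and II propagate the posture quaternion over the interval dt using the measured rotational angular velocity. A common form of such an update, given as a hedged sketch (it is an assumption and not necessarily the exact equations I and II of this embodiment; quat_mul and propagate are illustrative names), is:

```python
import math

# First-order integration of dQ/dt = 0.5 * Q ⊗ [0, ω]: Qt ≈ Q0 + 0.5 * (Q0 ⊗ [0, ω]) * dt,
# followed by re-normalization. Quaternions are (w, x, y, z) tuples.
def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def propagate(q0, omega, dt):
    wx, wy, wz = omega                      # rotational angular velocity (rad/s)
    dq = quat_mul(q0, (0.0, wx, wy, wz))
    qt = tuple(q + 0.5 * d * dt for q, d in zip(q0, dq))
    n = math.sqrt(sum(c * c for c in qt))   # re-normalize to a unit quaternion
    return tuple(c / n for c in qt)
```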
  • Step 102 Determine the second location information of the next location point according to the first location information, the first posture information, and the second posture information of the previous location point.
  • The displacement Δl of the photosphere from the previous position point to the next position point of the two adjacent position points is calculated, and then the second position information of the photosphere at the latter position point is determined according to the first position information of the previous position point and the displacement Δl.
  • The purpose of this embodiment is to use the first position information and the first posture information of the photosphere at the previous one of the two adjacent position points to predict the second position information of the photosphere at the latter position point, without needing to scan and perform image recognition for the second position information of the photosphere at the latter position point, which can effectively overcome problems such as operation delay and stalls caused by using image recognition technology to identify and locate the second position information of the photosphere at the latter position point.
  • Step 103 Generate and output the current display position of the virtual target corresponding to the controller according to the second position information.
  • the current display position of the virtual target corresponding to the controller is generated and output, wherein the current display position of the virtual target corresponding to the controller can be output on the VR display.
  • The current display position of the virtual target corresponding to the controller may also be output in the virtual reality space.
  • the method of generating and outputting the current display position of the virtual target corresponding to the controller in the VR display and/or virtual reality space may be a conventional method in the art, which will not be repeated in this embodiment.
  • In this embodiment, the second posture information of the next position point adjacent to the previous position point is determined according to the first posture information of the previous position point of the photosphere; the second position information of the next position point is determined according to the first position information of the previous position point, the first posture information, and the second posture information; and the current display position of the virtual target corresponding to the controller is generated and output according to the second position information.
  • In this way, there is no need to perform image recognition every time the light ball is tracked and positioned, which improves the interaction speed between the user and the virtual reality environment and enhances the user experience.
  • FIG. 2 is a schematic flowchart of another method for tracking a photosphere of a controller based on virtual reality according to an embodiment of the application. As shown in FIG. 2, the method includes:
  • Step 201 Acquire first position information and first posture information of the previous position point of the photosphere.
  • A camera and image recognition technology can be used to obtain the first position information of the previous position of the photosphere. Specifically, the camera is used to obtain the image data when the photosphere is at the previous position point, the acquired image data is recognized and processed using image recognition technology to obtain the position of the center of the photosphere, and the position of the center of the photosphere is converted into three-dimensional coordinates to obtain the first position information of the photosphere.
  • the image recognition technology is a conventional technology in the field, and will not be repeated in this embodiment.
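  • For illustration only, one conventional way to obtain the photosphere center from a captured frame is simple color segmentation followed by taking the centroid of the largest blob; the HSV range below is an assumed value for a red light ball, and this is not necessarily the recognition method used by this embodiment:

```python
import cv2
import numpy as np

def photosphere_center(frame_bgr, lower_hsv=(0, 120, 120), upper_hsv=(10, 255, 255)):
    # Segment pixels whose color falls in the assumed HSV range of the light ball.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                          # light ball not visible in this frame
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid in pixel coordinates (u, v)
```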
  • the inertial sensor IMU can be used to collect the IMU data at the previous position of the photosphere, and the collected IMU data can be processed to obtain the first posture information of the photosphere.
  • the collected IMU data can be processed by using a posture calculation algorithm to obtain the first posture information of the photosphere.
  • the first posture information of the photosphere includes at least the rotational angular velocity, acceleration, or yaw angle.
  • the inertial sensor IMU may be used to collect the gravitational acceleration when the photosphere is at the previous position point, and the rotation angular velocity can be obtained according to the gravitational acceleration.
  • acquiring the first position information of the previous position of the photosphere includes: acquiring an image, where the image is the image collected by the collection unit when the photosphere is located at the previous position; and determining that the photosphere is in the image according to the image In the location to get the first location information.
  • the acquisition unit may be a camera.
  • Multiple cameras can be set up to collect images of the photosphere at the same time, and then a spatial triangulation algorithm is used to determine the first position information of the photosphere at the previous position point.
  • the position and posture of the camera need to be calibrated in advance by using markers of known position and posture.
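  • As a hedged sketch of how the spatial triangulation could look with two pre-calibrated cameras (linear DLT triangulation; the projection-matrix inputs and function name are assumptions, not the exact algorithm of this embodiment):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    # P1, P2: 3x4 projection matrices from the pre-calibrated camera poses.
    # uv1, uv2: pixel coordinates of the photosphere center in each camera image.
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # 3D position of the photosphere in the calibration (world) frame
```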
  • Obtaining the first posture information of the previous position of the photosphere includes: using a gyroscope to obtain the angular velocity of the photosphere at the previous position point; using an accelerometer to obtain the acceleration of the photosphere at the previous position point; and using a magnetometer to obtain the yaw angle of the light ball at the previous position point.
  • the above-mentioned methods for acquiring the first posture information by using inertial sensors may all be conventional methods in the field, and details are not described herein again in this embodiment.
  • this embodiment further includes an operation of storing the acquired first location information. Store the first location information for use in subsequent steps.
  • Step 202 Obtain the posture data detected by the inertial measurement unit; determine the second posture information according to the first posture information, the posture data, and a preset movement time, where the movement time is the time required for the photosphere to move from the previous position point to the next position point.
  • the inertial measurement unit includes an inertial sensor
  • the attitude data includes any one of the following: rotation angular velocity, gravitational acceleration, yaw angle, and pitch angle.
  • This embodiment takes the rotation angular velocity as the attitude data for description.
  • determining the second posture information includes: determining the movement angle according to the posture data and the movement time; and determining the second posture information according to the movement angle and the first posture information.
  • the movement time refers to the time required for the light ball to move from the previous position to the next position.
  • The length of the movement time can be set according to actual needs, for example, according to the required accuracy of the tracking and positioning of the light ball. When high tracking and positioning accuracy is required, a shorter movement time can be set; conversely, when the required accuracy is lower, a longer movement time can be set. In general, the movement time can be set to 10 ms to 20 ms.
  • the moving angle refers to the angle that the light ball moves in a rotating motion within the moving time.
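  • In the simple case described here (rotation angular velocity as the attitude data), the movement angle is the angular velocity multiplied by the movement time, and the second posture is the first posture advanced by that angle; a minimal sketch under this assumed planar simplification:

```python
def second_posture(first_posture_angle, angular_velocity, move_time):
    # move_time is the preset time for the ball to go from the previous to the next
    # position point, e.g. 0.01 to 0.02 s (10 ms to 20 ms).
    movement_angle = angular_velocity * move_time
    return first_posture_angle + movement_angle
```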
  • Step 203 Determine the first predicted position when the photosphere is located at the previous position point according to the first posture information, where the first predicted position represents the position of the photosphere relative to the initial position point when the photosphere is located at the previous position point.
  • In an embodiment, determining the first predicted position when the photosphere is located at the previous position point according to the first posture information includes: determining the first predicted position according to the first posture information and a preset bone joint model, where the bone joint model is used to indicate the movement relationship of the human joints.
  • the bone joint model is used to indicate the changes in the position or movement trajectory of the human joints over time.
  • the bone joint model can also be used to indicate the change in the position or movement trajectory of the photosphere over time.
  • the bone joint model includes a preset moving radius; determining the first predicted position according to the first posture information and the preset bone joint model includes: determining according to the first posture information, the moving radius, and the preset first moving time The first predicted position, where the first movement time is the time required for the photosphere to move from the initial position point to the previous position point.
  • the bone joint model in this embodiment is adapted to the position of the human body joint, and different human joints correspond to different bone joint models.
  • the bone and joint models in this embodiment include a head model, an eye model, and an arm model.
  • the human head, eyes, and arms in a two-dimensional plane xoy coordinate system are taken as examples to illustrate the bone joint model.
  • Figure 2a is a schematic diagram of a human head doing up and down rotation with the neck as the center provided by this embodiment.
  • As shown in Figure 2a, point O1 represents the position of the human neck, and points L, M, and N represent positions of the human head; the human head rotates from point L through point M to point N at a rotational angular velocity ω1, where point L is the starting position of the rotational movement, and point M and point N are respectively the previous position point and the next position point of two adjacent position points; the distance r1 between the human head and the human neck is the radius of the rotational motion trajectory.
  • In the corresponding head model (equations (1) to (3)), ω1 is the first posture information of the human head at point M, r1 is the moving radius, and dt1 is the preset first movement time.
  • Figure 2b is a schematic diagram of a human eye doing left and right rotation around the occipital bone of the head provided by this embodiment.
  • As shown in Figure 2b, point O2 represents the position of the occipital bone of the head, and points F, G, and H represent positions of the human eye; the human eye rotates from point F through point G to point H at a rotational angular velocity ω2, where point F is the starting position of the rotational movement, and point G and point H are respectively the previous position point and the next position point of two adjacent position points; the distance r2 between the human eye and the occipital bone of the head is the radius of the rotational motion trajectory.
  • Figure 2c is a schematic diagram of a human arm performing a rotational movement centered on the elbow provided by this embodiment.
  • As shown in Figure 2c, point O3 represents the position of the elbow, and points C, D, and E represent positions of the human arm; the human arm rotates from point C through point D to point E at a rotational angular velocity ω3, where point C is the starting position of the rotational movement, and point D and point E are respectively the previous position point and the next position point of two adjacent position points; the distance r3 between the human arm and the elbow is the radius of the rotational motion trajectory.
  • In the corresponding arm model (equations (7) to (9)), ω3 is the first posture information of the human arm at point D, r3 is the moving radius, dt3 is the preset first movement time, and α is the angle between the line connecting the starting position point C to the elbow position O3 and the vertical direction.
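  • The head, eye, and arm examples above all share the same planar structure: a body part at a fixed radius from its pivot sweeps an angle equal to the angular velocity times the elapsed time. A hedged sketch of that planar model is given below (the sign convention and the angle measured from the vertical, as for α, are assumptions; the embodiment's equations (1) to (9) may use a different convention):

```python
import math

def predicted_position(pivot_xy, r, alpha, omega, dt):
    # pivot_xy: position of the pivot (neck, occipital bone, or elbow) in the xoy plane
    # r: moving radius; alpha: starting angle from the vertical; omega: angular velocity
    theta = alpha + omega * dt                  # total angle swept after time dt
    px, py = pivot_xy
    return (px + r * math.sin(theta), py + r * math.cos(theta))
```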
  • Equations (1) to (3), equations (4) and (5), and equations (7) to (9) illustrate the bone joint model only by taking the human head, eyes, and arm in the two-dimensional plane xoy coordinate system as examples.
  • the above method in this embodiment can also be used to determine the bone and joint models of other parts of the human body, such as the wrist model of the human wrist, which will not be repeated in this embodiment.
  • the position of the human joint in the three-dimensional xoyz coordinate system can be disassembled into the position in the two-dimensional plane xoy coordinate system, xoz coordinate system, and yoz coordinate system.
  • the above method is used to determine the bone joint models of the human joints in the above three two-dimensional planes, and then the three bone joint models are combined to obtain the bone joint models of the human joints in the three-dimensional xoyz coordinate system.
  • formula (10) is used to comprehensively express the bone joint model of the human joint:
  • where p is the position information of the human joint, q is the posture information of the human joint at a certain position (in quaternion form), q⁻¹ is the inverse of q, and ln is the moving radius of the human joint.
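  • The symbols of formula (10) suggest the standard way of rotating the moving-radius vector by the posture quaternion, i.e. a form like p = q · ln · q⁻¹. The sketch below is written under that assumption and is not necessarily the exact formula (10) of this embodiment:

```python
def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate_radius(q, l_n):
    # q: unit posture quaternion (w, x, y, z); l_n: moving-radius vector from the
    # joint pivot to the light ball. Returns the predicted position relative to the pivot.
    v = (0.0,) + tuple(l_n)                     # embed the vector as a pure quaternion
    qc = (q[0], -q[1], -q[2], -q[3])            # inverse of a unit quaternion
    _, x, y, z = quat_mul(quat_mul(q, v), qc)
    return (x, y, z)
```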
  • Step 204 Determine, according to the second posture information, a second predicted position when the light ball is located at the next position point, where the second predicted position represents the position of the light ball relative to the initial position point when the light ball is located at the next position point.
  • determining the second predicted position when the photosphere is at the next position point according to the second posture information includes: determining the second predicted position according to the second posture information and the bone and joint model.
  • Determining the second predicted position according to the second posture information and the bone joint model includes: determining the second predicted position according to the second posture information, the moving radius, and the preset second moving time, where the second moving time is a photosphere The time required to move from the initial point to the next point.
  • The bone joint model does not change with the movement of the human joint; that is to say, the same bone joint model is used when determining the first predicted position and the second predicted position of the photosphere at the two adjacent position points.
  • The method and principle of step 204 are similar to or the same as those of step 203; please refer to the related description of step 203, which will not be repeated here.
  • Step 205 Determine the movement displacement of the light ball according to the second predicted position and the first predicted position, where the movement displacement represents the displacement of the light ball from the previous position point to the next position point.
  • The distance between the second predicted position and the first predicted position is calculated in the spatial coordinate system; this is the displacement of the light ball as it moves from the previous position point to the latter position point.
  • Fig. 2d is a schematic diagram of the light ball moving from point J to point K provided by this embodiment. As shown in Fig. 2d, point J and point K are respectively the previous position point and the latter position point; the method of this embodiment is used to calculate the displacement of the photosphere from point J to point K:
  • The first predicted position p_J and the second predicted position p_K of the photosphere at point J and point K are determined by formula (10), and the movement displacement is obtained from their difference.
  • Step 206 Determine second position information according to the movement displacement and the first position information; according to the second position information, generate and output the current display position of the virtual target corresponding to the controller.
  • the first position information is the real position information of the light ball at the previous position point.
  • The first position information of the light ball at the previous position point and the movement displacement are superimposed to obtain the second position information of the light ball at the latter position point.
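  • A minimal sketch of steps 205 and 206 (variable names are illustrative): the movement displacement is the difference between the two predicted positions, and the second position is the actually measured first position plus that displacement:

```python
import numpy as np

def second_position(first_pos_measured, first_pred, second_pred):
    displacement = np.asarray(second_pred) - np.asarray(first_pred)   # step 205
    return np.asarray(first_pos_measured) + displacement              # step 206
```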
  • In an embodiment, before generating and outputting the current display position of the virtual target corresponding to the controller according to the second position information, the method further includes: smoothing the second position information according to the pre-stored position information of the historical position points of the photosphere to obtain smoothed second position information.
  • Smoothing the second position information can reduce the noise or distortion of the image.
  • The method of smoothing the second position information in this embodiment may be a conventional method in the art, such as a mean filtering method, a median filtering method, a Gaussian filtering method, or a bilateral filtering method.
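  • As an illustration of the smoothing step, the sketch below applies a simple mean filter over the most recent stored historical positions together with the newly determined second position; the window size and storage format are assumptions:

```python
import numpy as np

def smooth_position(history, second_position, window=5):
    # history: list of previously stored photosphere positions (oldest first)
    recent = list(history[-(window - 1):]) + [second_position]
    return np.mean(np.asarray(recent), axis=0)
```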
  • the method of the present application further includes: generating and outputting the current pose information of the photosphere according to the second position information and the second posture information.
  • the pose information includes position information and pose information
  • the current pose information of the photosphere is generated and output, so that the pose information can be cited when the photosphere is continuously tracked and positioned.
  • In this embodiment, the first position information and the first posture information of the previous position point of the photosphere are acquired; the posture data detected by the inertial measurement unit is obtained; the second posture information is determined according to the first posture information, the posture data, and the preset movement time, where the movement time is the time required for the light ball to move from the previous position point to the next position point; the first predicted position when the light ball is at the previous position point is determined according to the first posture information, where the first predicted position represents the position of the photosphere relative to the initial position point when it is located at the previous position point; the second predicted position when the photosphere is at the next position point is determined according to the second posture information, where the second predicted position represents the position of the light ball relative to the initial position point when it is located at the next position point; the movement displacement of the light ball is determined according to the second predicted position and the first predicted position, where the movement displacement represents the displacement of the light ball from the previous position point to the next position point; and the second position information is determined according to the movement displacement and the first position information.
  • In this way, there is no need to perform image recognition every time the light ball is tracked and positioned, which improves the interaction speed between the user and the virtual reality environment and improves the user experience. Further, the first predicted position of the light ball at the previous position point and the second predicted position at the next position point are used to determine the movement displacement of the light ball from the previous position point to the next position point, and then the second position information of the light ball at the next position point is determined according to the actually measured first position information of the light ball at the previous position point and the movement displacement, which can further improve the accuracy and precision of the photosphere tracking and positioning.
  • Fig. 3 is a schematic structural diagram of a virtual reality-based controller photosphere tracking device provided by an embodiment of the application. As shown in Fig. 3, the device includes:
  • the first processing unit 1 is configured to determine the second posture information of the next location point adjacent to the previous location point according to the first posture information of the previous location point of the photosphere;
  • the second processing unit 2 is configured to determine the second location information of the next location point according to the first location information, the first posture information, and the second posture information of the previous location point;
  • the third processing unit 3 is configured to generate and output the current display position of the virtual target corresponding to the controller according to the second position information.
  • In this embodiment, the second posture information of the next position point adjacent to the previous position point is determined according to the first posture information of the previous position point of the photosphere; the second position information of the next position point is determined according to the first position information of the previous position point, the first posture information, and the second posture information; and the current display position of the virtual target corresponding to the controller is generated and output according to the second position information.
  • In this way, there is no need to perform image recognition every time the light ball is tracked and positioned, which improves the interaction speed between the user and the virtual reality environment and enhances the user experience.
  • Fig. 4 is a schematic structural diagram of another virtual reality-based controller photosphere tracking device provided by an embodiment of the application. On the basis of Fig. 3, as shown in Fig. 4:
  • the second processing unit 2 includes:
  • the first processing subunit 21 is configured to determine, according to the first posture information, a first predicted position when the light ball is located at the previous position point, where the first predicted position represents the position of the light ball relative to the initial position point when it is located at the previous position point;
  • the second processing subunit 22 is configured to determine, according to the second posture information, a second predicted position when the light ball is located at the next position point, where the second predicted position represents the position of the light ball relative to the initial position point when it is located at the next position point;
  • the third processing subunit 23 is configured to determine the movement displacement of the photosphere according to the second predicted position and the first predicted position, where the movement displacement represents the displacement of the photosphere from the previous position point to the next position point;
  • the fourth processing subunit 24 is configured to determine the second position information according to the movement displacement and the first position information.
  • the first processing subunit 21 includes:
  • the first processing module 211 is configured to determine a first predicted position according to the first posture information and a preset bone joint model, where the bone joint model is used to indicate the movement relationship of the human joints;
  • the second processing subunit 22 includes:
  • the second processing module 221 is configured to determine the second predicted position according to the second posture information and the bone joint model.
  • the bone joint model includes a preset moving radius
  • the first processing module 211 includes:
  • the first processing sub-module 2111 is used to determine the first predicted position according to the first posture information, the moving radius, and the preset first movement time, where the first movement time is the time required for the photosphere to move from the initial position point to the previous position point;
  • the second processing module 221 includes:
  • a second processing sub-module, used to determine the second predicted position according to the second posture information, the moving radius, and the preset second movement time, where the second movement time is the time required for the photosphere to move from the initial position point to the next position point.
  • In an embodiment, the first processing unit 1 includes:
  • the fifth processing subunit 11, which is used to obtain the posture data detected by the inertial measurement unit;
  • the sixth processing subunit 12, which is used to determine the second posture information according to the first posture information, the posture data, and the preset movement time, where the movement time is the time required for the photosphere to move from the previous position point to the next position point.
  • the sixth processing subunit 12 includes:
  • the third processing module 121 is configured to determine the movement angle according to the posture data and the movement time;
  • the fourth processing module 122 is configured to determine the second posture information according to the movement angle and the first posture information.
  • the attitude data is any one of the following: rotation angular velocity, gravitational acceleration, yaw angle, and pitch angle.
  • The device also includes an acquiring unit 4, which is used to acquire the first position information and the first posture information of the previous position point of the photosphere before the first processing unit 1 determines the second posture information of the next location point adjacent to the previous location point according to the first posture information of the previous location point of the photosphere.
  • the obtaining unit 4 includes:
  • the acquiring subunit 41 is used to acquire an image, where the image is an image acquired by the acquisition unit when the photosphere is located at a previous position;
  • the seventh processing subunit 42 is used to determine the position of the light ball in the image according to the image to obtain the first position information.
  • the device also includes:
  • the fourth processing unit 5 is configured to smooth the second position information based on the pre-stored position information of the historical position points of the photosphere before the third processing unit 3 generates and outputs the current display position of the virtual target corresponding to the controller according to the second position information, so as to obtain smoothed second position information.
  • the device also includes:
  • the fifth processing unit 6 is configured to generate and output the current pose information of the photosphere according to the second position information and the second posture information.
  • In this embodiment, the first position information and the first posture information of the previous position point of the photosphere are acquired; the posture data detected by the inertial measurement unit is obtained; the second posture information is determined according to the first posture information, the posture data, and the preset movement time, where the movement time is the time required for the light ball to move from the previous position point to the next position point; the first predicted position when the light ball is at the previous position point is determined according to the first posture information, where the first predicted position represents the position of the photosphere relative to the initial position point when it is located at the previous position point; the second predicted position when the photosphere is at the next position point is determined according to the second posture information, where the second predicted position represents the position of the light ball relative to the initial position point when it is located at the next position point; the movement displacement of the light ball is determined according to the second predicted position and the first predicted position, where the movement displacement represents the displacement of the light ball from the previous position point to the next position point; and the second position information is determined according to the movement displacement and the first position information.
  • In this way, there is no need to perform image recognition every time the light ball is tracked and positioned, which improves the interaction speed between the user and the virtual reality environment and improves the user experience. Further, the first predicted position of the light ball at the previous position point and the second predicted position at the next position point are used to determine the movement displacement of the light ball from the previous position point to the next position point, and then the second position information of the light ball at the next position point is determined according to the actually measured first position information of the light ball at the previous position point and the movement displacement, which can further improve the accuracy and precision of the photosphere tracking and positioning.
  • the present application also provides an electronic device and a readable storage medium.
  • As shown in FIG. 5, it is a block diagram of an electronic device for the virtual reality-based controller photosphere tracking method according to an embodiment of the present application.
  • Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices can also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementation of the application described and/or required herein.
  • the electronic device includes: one or more processors 501, a memory 502, and interfaces for connecting various components, including a high-speed interface and a low-speed interface.
  • the various components are connected to each other using different buses, and can be installed on a common motherboard or installed in other ways as needed.
  • the processor may process instructions executed in the electronic device, including instructions stored in or on the memory to display graphical information of the GUI on an external input/output device (such as a display device coupled to an interface).
  • an external input/output device such as a display device coupled to an interface.
  • If necessary, multiple processors and/or multiple buses can be used together with multiple memories.
  • multiple electronic devices can be connected, and each device provides part of the necessary operations (for example, as a server array, a group of blade servers, or a multi-processor system).
  • a processor 501 is taken as an example.
  • the memory 502 is a non-transitory computer-readable storage medium provided by this application.
  • the memory stores instructions that can be executed by at least one processor, so that the at least one processor executes the virtual reality-based controller photosphere tracking method provided in the present application.
  • the non-transitory computer-readable storage medium of the present application stores computer instructions, and the computer instructions are used to make the computer execute the virtual reality-based controller photosphere tracking method provided by the present application.
  • the memory 502 can be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the program corresponding to the virtual reality-based controller photosphere tracking method in the embodiment of the present application Instructions/modules (for example, the acquisition unit 1, the first processing unit 2, and the second processing unit 3 shown in FIG. 3).
  • the processor 501 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 502, that is, realizing the virtual reality-based controller photosphere tracking method in the above method embodiment .
  • the memory 502 may include a storage program area and a storage data area.
  • The storage program area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of the electronic device, and the like.
  • the memory 502 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
  • the memory 502 may optionally include memories remotely provided with respect to the processor 501, and these remote memories may be connected to an electronic device based on virtual reality-based photosphere tracking via a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the electronic device based on the virtual reality-based photosphere tracking method may further include: an input device 503 and an output device 504.
  • the processor 501, the memory 502, the input device 503, and the output device 504 may be connected by a bus or in other ways. In FIG. 5, the connection by a bus is taken as an example.
  • The input device 503 can receive input digital or character information, and generate key signal input related to the user settings and function control of the electronic device based on virtual reality photosphere tracking, such as a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, trackball, joystick, and other input devices.
  • the output device 504 may include a display device, an auxiliary lighting device (for example, LED), a tactile feedback device (for example, a vibration motor), and the like.
  • the display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
  • Various implementations of the systems and techniques described herein can be implemented in digital electronic circuit systems, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a dedicated or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit the data and instructions to the storage system, the at least one input device, and the at least one output device.
  • The systems and techniques described here can be implemented on a computer that has: a display device for displaying information to the user (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor); and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer.
  • Other types of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback); and can be in any form (including Acoustic input, voice input, or tactile input) to receive input from the user.
  • the systems and technologies described herein can be implemented in a computing system that includes back-end components (for example, as a data server), or a computing system that includes middleware components (for example, an application server), or a computing system that includes front-end components (for example, A user computer with a graphical user interface or a web browser, through which the user can interact with the implementation of the system and technology described herein), or includes such back-end components, middleware components, Or any combination of front-end components in a computing system.
  • the components of the system can be connected to each other through any form or medium of digital data communication (for example, a communication network). Examples of communication networks include: local area network (LAN), wide area network (WAN), and the Internet.
  • the computer system can include clients and servers.
  • the client and server are generally far away from each other and usually interact through a communication network.
  • the relationship between the client and the server is generated by computer programs that run on the corresponding computers and have a client-server relationship with each other.
  • the controller carries an inertial measurement unit (IMU).
  • The IMU can measure the angular velocity and acceleration of the controller in three-dimensional space, and use these to calculate the controller's attitude, thereby achieving three degrees of freedom (3DOF) tracking.
  • However, the position of the controller cannot be measured in this way, and the degrees of freedom of movement along the three rectangular coordinate axes X, Y, and Z cannot be obtained. Therefore, it is difficult to track the position change of the controller when the user translates the controller, resulting in poor interaction between the user and the surrounding environment and affecting the user experience.
  • a multi-point light-emitting unit is set on the controller, and multiple light points of the controller are tracked by a visual method, so as to track the position and posture of the controller, thereby achieving 6DOF tracking of the controller.
  • This embodiment provides a tracking method for a controller.
  • the method can be applied to the application scenario diagram shown in FIG. 6.
  • As shown in FIG. 6, the application scenario provided by this embodiment includes the tracking processor 101 of the controller, the controller 102, and the image acquisition device 103.
  • Fig. 7 is a schematic diagram of a controller provided by this embodiment.
  • the controller carries a multi-point light-emitting unit, and the multi-point light-emitting unit includes multiple light points.
  • The number of light points is not limited here and can be set according to actual application scenarios.
  • a possible use form of the controller is a handle, and the user can hold the handle with his hand and control the movement of the handle.
  • the tracking processor 101 of the controller can obtain the converted sequence images of the multi-point light-emitting unit during the movement of the controller 102 according to the image obtaining device 103, and then track the position and posture of the controller, and determine the controller Six degrees of freedom tracking data.
  • the application scenario includes a tracking processor and an image acquisition device, as well as any one of a bracelet, a ring, and a watch.
  • the bracelet, ring, or watch carries a multi-point light-emitting unit, and the multi-point light-emitting unit includes a plurality of light points, so as to realize the tracking of the bracelet, ring or watch.
  • FIG. 8 is a schematic flowchart of a tracking method for a controller provided by an embodiment of the application.
  • the controller carries a multi-point light-emitting unit.
  • The execution subject of this embodiment may be the tracking processor of the controller in the embodiment shown in FIG. 6.
  • the method may include:
  • S301 Obtain, according to the image acquisition device, the transformed sequence image of the multi-point light-emitting unit during the movement of the controller, and determine the transformation mode of the light points in the sequence image.
  • the aforementioned image acquisition device may be a monocular, binocular, or multi-lens camera that comes with the all-in-one machine.
  • For binocular or multi-lens cameras, each camera independently acquires the transformation sequence image of the multi-point light-emitting unit during the movement of the controller.
  • Binocular or multi-lens cameras can expand the tracking range, but this embodiment is also applicable to monocular cameras; the following description takes a monocular camera as an example.
  • the camera captures an image of a multi-point light-emitting unit during the movement of the controller.
  • the multi-point light-emitting unit includes multiple light points. In this embodiment, the number of light points is greater than or equal to 4. The number can be set according to the actual application scenario.
  • For example, the first light point transformation mode is RGBRGB; the second light point transformation mode is RRGGBB; the third light point transformation mode is 101010; the fourth light point transformation mode is 110011, where R, G, and B represent red, green, and blue respectively, and 1 and 0 represent bright and dark respectively.
  • the color transformation is not limited to red, green, and blue, and may for example use red, orange, yellow, green, cyan, blue, and purple;
  • the brightness levels are not limited to fully bright and fully dark, and may be multiple levels such as fully bright, 3/4 bright, half bright, 1/4 bright, and dark; a transformation mode may also combine color transformation and brightness level transformation at the same time.
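  • As an illustration only (the patterns and identifiers below simply restate the examples above and are not a reference implementation), such transformation modes can be represented as per-frame emission schedules:

```python
# Illustrative emission schedules for the example transformation modes above.
# 'R'/'G'/'B' denote colors; '1'/'0' denote bright/dark brightness levels.
LIGHT_POINT_PATTERNS = {
    1: list("RGBRGB"),   # first light point: color transformation mode
    2: list("RRGGBB"),   # second light point: color transformation mode
    3: list("101010"),   # third light point: brightness transformation mode
    4: list("110011"),   # fourth light point: brightness transformation mode
}

def emission_state(light_id: int, frame_index: int) -> str:
    """Return the color or brightness state a light point shows in a given frame."""
    pattern = LIGHT_POINT_PATTERNS[light_id]
    return pattern[frame_index % len(pattern)]
```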
  • the conversion sequence image of the multi-point light-emitting unit is acquired by the image acquisition device, and the conversion mode of the light point in the sequence image can be determined according to the information such as the color and brightness level of the light spots in the sequence image.
  • this embodiment does not limit the implementation of obtaining information such as the color and brightness level of the light spots in the sequence image.
  • For example, a difference threshold can be set for the color value of each preset color according to the actual situation: if the difference between the color value of a light spot and the color value of a preset color is less than the first preset difference threshold, the color of the light spot is taken to be that preset color.
  • Similarly, a difference threshold can be set for the brightness value of each brightness level according to the actual situation: if the difference between the brightness value of a light spot and the brightness value of a certain brightness level is less than the second preset difference threshold, the brightness of the light spot is taken to be that brightness level.
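  • A minimal sketch of this threshold-based classification, assuming hypothetical preset color values, brightness values, and thresholds (the actual values would be set according to the application scenario):

```python
import math

# Hypothetical preset color values (RGB), brightness values, and thresholds;
# the actual values would be set according to the application scenario.
PRESET_COLORS = {"R": (255, 0, 0), "G": (0, 255, 0), "B": (0, 0, 255)}
COLOR_THRESHOLD = 80        # first preset difference threshold
PRESET_BRIGHTNESS = {"1": 255, "0": 40}
BRIGHTNESS_THRESHOLD = 60   # second preset difference threshold

def classify_color(spot_rgb):
    """Return the preset color whose value differs from the spot by less than the threshold."""
    for name, ref in PRESET_COLORS.items():
        if math.dist(spot_rgb, ref) < COLOR_THRESHOLD:
            return name
    return None

def classify_brightness(spot_brightness):
    """Return the brightness level whose value differs from the spot by less than the threshold."""
    for level, ref in PRESET_BRIGHTNESS.items():
        if abs(spot_brightness - ref) < BRIGHTNESS_THRESHOLD:
            return level
    return None
```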
  • the change frequency of the above-mentioned multi-point light-emitting unit can be set according to the actual application scenario.
  • the shooting frequency of the image acquisition device should be consistent with the change frequency of the multi-point light-emitting unit, so that the shooting of the image acquisition device is synchronized with the conversion of the multi-point light-emitting unit.
  • the image acquisition device can precisely capture the transformation of each lamp in the multi-point light-emitting unit, so that the transformation mode of the light points in the sequence image can be accurately determined.
  • S302 Obtain an identifier corresponding to the target light point in the sequence image according to the conversion mode of the light point.
  • the above-mentioned multi-point light-emitting unit includes a plurality of light points, and each light point is transformed according to a different transformation method.
  • the corresponding light point in the sequence image can be determined according to the transformation method of the light point in the sequence image.
  • the number of target light points can be set according to actual application scenarios: for example, when the number of light points of the multi-point light-emitting unit is small, the target light points can be all of its light points; when the number of light points is large, the target light points may be only part of the light points of the multi-point light-emitting unit.
  • the selection of the target light point can also be set according to actual application scenarios, for example, during the movement of the controller, a light point that is always in a place that can be photographed by the image acquisition device.
  • S303 Based on the identifier corresponding to the target light spot, determine the mapping position of the target light spot in each frame of the image sequence.
  • S304 Obtain six-degree-of-freedom tracking data of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller.
  • By inputting the above-mentioned mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller into OpenCV, the position of the target light spot during the movement of the controller can be obtained, and the position and posture of the controller can then be determined.
  • In the tracking method of the controller provided by the embodiments of the application, the controller carries a multi-point light-emitting unit; the method acquires, through an image acquisition device, the transformed sequence images of the multi-point light-emitting unit during the movement of the controller and determines the transformation mode of the light points in the sequence images. Since the transformation mode of each light point is different, the embodiments of the present application can accurately determine the identifier corresponding to the target light point in the sequence images according to the light point transformation mode; based on that identifier, the mapping position of the target light point in each frame of the sequence images is determined; according to the mapping position and the initial position of the target light point, the position of the target light point relative to the image acquisition device during the movement of the controller is obtained, and then, according to that relative position and the position of the image acquisition device during the movement of the controller, the position of the target light point is obtained.
  • From the position of the target light point, the three-dimensional space position and rotation attitude of the controller can be determined, realizing six-degree-of-freedom tracking of the controller and improving the interaction between the user and the surrounding environment.
  • the tracking method of the controller provided in the embodiments of the present application does not require installation of additional devices, such as laser detection devices required for laser positioning, etc., thereby saving cost and space.
  • FIG. 9 is a schematic flowchart of another tracking method of a controller provided by an embodiment of the application.
  • the controller carries a multi-point light-emitting unit.
  • the execution subject of this embodiment may be the tracking processor of the controller shown in FIG. 6.
  • the method includes:
  • S402 Obtain an identifier corresponding to the target light point in the sequence image according to the conversion mode of the light point.
  • S403 Based on the identifier corresponding to the target light spot, determine the mapping position of the target light spot in each frame of the image sequence.
  • S401-S403 are the same as the foregoing S301-S303 and will not be repeated here.
  • Using only the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller, the position and posture of the controller can be determined only at the moments when each frame of image is captured.
  • As a result, the tracking data is not smooth, and the above tracking method suffers from delay.
  • the update rate of IMU attitude tracking is fast, the delay is lower, and smooth tracking data can be obtained.
  • this embodiment needs to obtain the posture tracking result of the controller sent by the IMU.
  • This embodiment does not limit the sequence of S404 and S401-S403, that is, S404 may be executed first, and then S401-S403, or S401-S403 may be executed first, and then S404 may be executed.
  • obtaining the six-degree-of-freedom tracking data of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller includes:
  • S4051 Obtain the position and posture of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller.
  • the position of the target light spot during the movement of the controller can be obtained, so as to determine the position and attitude of the controller at the moment each frame of image is captured.
  • the number of the target light points is not less than a preset number
  • obtaining the position and posture of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller includes:
  • obtaining the position and posture of the controller through the PnP algorithm according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device.
  • the number of the above-mentioned target light points is not less than the preset number, and the preset number can be set according to actual application scenarios.
  • the number of target light points must be at least 4 before the PnP algorithm can be applied.
  • the PnP algorithm is a method for solving 3D-to-2D point-pair motion. It describes how to obtain the pose of the camera when n (n ≥ 4) 3D space points and their mapping positions are known.
  • the camera pose and the positions of the n 3D space points are in a relative relationship; therefore, when the pose of the camera and the mapping positions of the n 3D space points are known, the positions of the n 3D space points can be obtained through the PnP algorithm. Since the three-dimensional geometric structure of the multi-point light-emitting unit in the controller is fixed, once the position of the target light point is obtained, the three-dimensional space position and rotation attitude of the controller can be determined, thereby obtaining the six-degree-of-freedom tracking data of the controller.
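  • A minimal sketch of this step using OpenCV's PnP solver (the OpenCV calls are standard; how the result is combined with the position of the image acquisition device is only outlined here):

```python
import numpy as np
import cv2

def controller_pose_from_pnp(object_points, image_points, camera_matrix, dist_coeffs):
    """Estimate the controller pose relative to the camera with the PnP algorithm.

    object_points: N x 3 initial positions of the target light points in the
                   controller's own (rigid) coordinate frame, N >= 4.
    image_points:  N x 2 mapping positions of the same light points in one frame.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float64),
        np.asarray(image_points, dtype=np.float64),
        camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # rotation of the controller in camera coordinates
    return rotation, tvec               # tvec: controller position in camera coordinates
```

  • Combining the returned camera-relative pose with the known position of the image acquisition device then gives the target light points, and hence the controller, in the world coordinate system.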
  • the position of the target light point relative to each camera is obtained through the PnP algorithm according to the mapping position of the target light point in each frame of that camera's sequence images and the initial position of the target light point.
  • the positions of the multiple groups of target light points are then averaged or weighted to obtain the position of the target light point, thereby improving the accuracy of the obtained position.
  • S4052 Fuse the position and posture of the controller and the result of posture tracking of the controller sent by the IMU to obtain six-degree-of-freedom tracking data of the controller.
  • the position and posture of the controller and the result of posture tracking of the controller sent by the IMU are input into OpenCV, and mutual compensation, correction, smoothing, and prediction are performed through a preset fusion algorithm to obtain the controller’s Six degrees of freedom tracking data.
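  • The patent does not specify the fusion algorithm; as a simplified illustration only, a complementary-style blend of the visually determined pose and the IMU attitude could look like the following (a Kalman filter would typically be used for the compensation, smoothing, and prediction mentioned above):

```python
import numpy as np

def fuse_pose(visual_position, visual_quat, imu_quat, alpha=0.98):
    """Simplified complementary-style fusion of the visually determined pose and
    the IMU attitude (the patent only states that a preset fusion algorithm
    performs compensation, correction, smoothing, and prediction).

    visual_position: 3D position from light-point tracking (low rate, drift-free).
    visual_quat, imu_quat: orientations as quaternions [w, x, y, z].
    alpha: weight of the high-rate IMU attitude between visual updates.
    """
    # Blend the two attitude estimates: the IMU dominates for smoothness and
    # update rate, while the visual result limits long-term drift. A linear
    # quaternion blend is only valid when the two orientations are close.
    fused_quat = alpha * np.asarray(imu_quat, dtype=float) \
        + (1.0 - alpha) * np.asarray(visual_quat, dtype=float)
    fused_quat /= np.linalg.norm(fused_quat)
    # Position comes from the visual tracking; a Kalman filter could additionally
    # predict between frames using the IMU acceleration.
    return np.asarray(visual_position, dtype=float), fused_quat
```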
  • In this way, the advantages of the IMU's high update rate and smooth posture tracking can be fully utilized, while avoiding the problem that drift and error accumulation in IMU attitude tracking make it difficult to track the position change of the controller and prevent 6DOF tracking.
  • It also solves the problem that tracking data obtained only from the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller is not smooth and is delayed.
  • the controller carries a multi-point light-emitting unit
  • the image acquisition device acquires the transformed sequence images of the above-mentioned multi-point light-emitting unit during the movement of the controller, and determines the transformation mode of the light points in the sequence image;
  • the transformation mode of each light spot is different, so the embodiments of the present application can accurately determine the identifier of the target light spot in the sequence images according to the transformation mode of the light spot; based on the identifier corresponding to the target light spot, the mapping position of the target light spot in each frame of the sequence images is determined;
  • according to the mapping position and the initial position of the target light point, the position of the target light point relative to the image acquisition device during the movement of the controller is obtained, and then, according to that relative position and the position of the image acquisition device during the movement, the position of the target light point is obtained;
  • from the position of the target light point, the position and posture of the controller can be determined; by fusing the position and posture of the controller with the posture tracking result of the controller sent by the IMU, the high update rate and smoothness of the IMU's posture tracking can be fully utilized.
  • S502 Based on the light points, identify the same points of adjacent frames in the sequence of images.
  • the identifying the same points of adjacent frames in the sequence of images based on the light points includes:
  • according to the distance between the pixel coordinates of light spots in adjacent frames, the same points of adjacent frames in the sequence images are identified.
  • u1 and v1 are the horizontal and vertical pixel coordinates of a light spot in the previous frame of image;
  • u2 and v2 are the horizontal and vertical pixel coordinates of a light spot in the next frame of image;
  • the distance between the two light spots is d1 = √((u1 − u2)² + (v1 − v2)²), and the preset distance threshold is d0. If d1 ≤ d0, the two light spots are judged to be the same point; otherwise (d1 > d0), the two light spots are judged not to be the same point.
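  • A minimal sketch of this same-point test (the function name is illustrative):

```python
import math

def is_same_point(u1, v1, u2, v2, d0):
    """Judge whether two light spots in adjacent frames are the same point,
    based on the pixel-distance threshold d0 described above."""
    d1 = math.hypot(u1 - u2, v1 - v2)
    return d1 <= d0
```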
  • If the same points are continuous, S5041 is executed; if the same points are not continuous, S5042-S5043 are executed.
  • For example, during the movement of the controller, a light spot of the lamp group may be turned to a position that cannot be captured by the image acquisition device, so the image captured at that moment does not contain the light spot (the light spot is present in the previous frame of image but absent from the next frame); sometimes a light spot that could not be captured before reappears (the light spot is absent from the previous frame of image but present in the next frame).
  • the same point cannot be found in some images in the sequence image, and the same point is not continuous in the sequence image.
  • the light point of the lamp group has not been transferred to a place that cannot be captured by the image acquisition device. The same point can be found in each frame of the sequence image, and the same point is continuous in the sequence image.
  • S5041 Obtain an identifier corresponding to the target light point in the sequence image according to the initial identifier of the light point.
  • the identifier corresponding to the light point in each frame of the sequence images can be directly determined according to the initial identifier of the light point, and it is not necessary to obtain the identifier corresponding to the target light point through the light point transformation mode, which simplifies the operation process and improves the tracking efficiency.
  • S5042 Obtain, according to the image acquisition device, the transformed sequence image of the multi-point light-emitting unit during the movement of the controller, and determine the transformation mode of the light points in the sequence image.
  • If the same point is not continuous in the sequence images, the same point before and after the interruption cannot be matched, and it is therefore impossible to determine, according to the initial identifier of the light point, the identifier corresponding to the light spot in the images after the interruption.
  • Therefore, for a light spot that could not be photographed before and then reappears, it is necessary, when it reappears, to obtain the identifier corresponding to the target light spot in the sequence images through the light point transformation mode.
  • the light spot is an LED light spot
  • the conversion method includes color conversion and/or brightness level conversion
  • the determination of the transformation mode of the light points in the sequence image may be implemented in the following manners:
  • the color and/or brightness level of the same point determine the color transformation and/or brightness level transformation of the same point in a set of sequence images.
  • For the same point, when it reappears, the color transformation and/or brightness level transformation of the same point in a group of sequence images is determined based on the color and/or brightness level of the same point, where the number of frames in a group of sequence images is related to the transformation period of the light spot; for example, if the light spot is transformed four times per period, a group of sequence images is four consecutive frames.
  • the light spot is an infrared light spot
  • the conversion method includes infrared light-dark level conversion
  • the determination of the transformation mode of the light points in the sequence image may also be implemented in the following manner:
  • the infrared light-dark level transformation of the same point in a set of sequence images is obtained.
  • S5043 Obtain an identifier corresponding to the target light point in the sequence image according to the conversion mode of the light point.
  • the obtaining the identifier corresponding to the target light point in the sequence image according to the light point transformation mode includes:
  • the identification corresponding to the target light spot in the sequence image is obtained.
  • a preset transformation mode that is the same as the color transformation mode of the same point in a group of sequence images is found, and the identifier corresponding to that preset transformation mode is taken as the identifier corresponding to the same point.
  • For example, if the color transformation of the same point in a group of sequence images is RGBRGB, and the identifier corresponding to the preset transformation mode RGBRGB is the first light point, then the same point is the first light point;
  • if the color transformation of the same point in a group of sequence images is RRGGBB, and the identifier corresponding to the preset transformation mode RRGGBB is the second light point, then the same point is the second light point;
  • if the brightness level transformation of the same point in a group of sequence images is 101010, and the identifier corresponding to the preset transformation mode 101010 is the third light point, then the same point is the third light point, where R, G, and B represent red, green, and blue respectively, and 1 and 0 represent bright and dark respectively.
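  • A minimal sketch of this lookup, reusing the example transformation modes above (in practice the observed window must cover a full transformation period, and the matching may need to allow for an unknown phase offset):

```python
# Preset transformation modes and the identifiers they correspond to
# (the patterns restate the examples above; the identifier strings are illustrative).
PRESET_MODES = {
    "RGBRGB": "first light point",
    "RRGGBB": "second light point",
    "101010": "third light point",
    "110011": "fourth light point",
}

def identify_light_point(observed_sequence):
    """Match the color/brightness sequence observed for one 'same point' over a
    group of sequence images against the preset transformation modes."""
    return PRESET_MODES.get("".join(observed_sequence))   # None if nothing matches
```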
  • the identification corresponding to the target light spot in the sequence image can be determined more accurately and conveniently.
  • S505 Based on the identifier corresponding to the target light spot, determine the mapping position of the target light spot in each frame of the image sequence.
  • S506 Obtain six-degree-of-freedom tracking data of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller.
  • S505-S506 is the same as the above-mentioned S303-S304, and will not be repeated here.
  • In the tracking method of the controller provided by the embodiments of the present application, the controller carries a multi-point light-emitting unit; the method extracts light points from the sequence images and, based on the light points, identifies the same points of adjacent frames in the sequence images.
  • If the same point is continuous, that is, for a given light point the light point can be found in every frame of the sequence images, the identifier corresponding to the light point in each frame can be determined directly from its initial identifier, without using the light point transformation mode, which simplifies the operation process and improves the tracking efficiency; if the same point is not continuous in the sequence images, the identifier of the target light spot in the sequence images is determined through the light point transformation mode when it reappears.
  • The image acquisition device is used to acquire the transformed sequence images of the multi-point light-emitting unit during the movement of the controller, and the transformation mode of the light points in the sequence images is determined; according to the mapping position and the initial position of the target light point, the position of the target light point relative to the image acquisition device is obtained, and then, according to that relative position and the position of the image acquisition device during the movement, the position of the target light point is obtained. Since the three-dimensional geometric structure of the multi-point light-emitting unit in the controller is fixed, the position and posture of the controller can thereby be determined.
  • FIG. 11 is a schematic structural diagram of a tracking device for a controller provided in an embodiment of the application.
  • the tracking device 60 of the controller includes: a first determining module 601, a first obtaining module 602, a second determining module 603, and a second obtaining module 604.
  • the first determining module 601 is configured to acquire, according to the image acquisition device, the transformed sequence image of the multi-point light-emitting unit during the movement of the controller, and determine the transformation mode of the light points in the sequence image;
  • the second determining module 603 is configured to determine the mapping position of the target light point in each frame of the image sequence based on the identifier corresponding to the target light point;
  • the second obtaining module 604 is configured to obtain the six-degree-of-freedom tracking data of the controller according to the mapping position, the initial position of the target light spot, and the position of the image obtaining device during the movement of the controller .
  • the device provided in the embodiment of the present application can be used to implement the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and the details of the embodiments of the present application are not repeated here.
  • FIG. 12 is a schematic structural diagram of another tracking device for a controller provided by an embodiment of the application.
  • the tracking device 60 of the controller provided in this embodiment on the basis of the embodiment in FIG. 11, further includes: an acquisition module 605 and a processing module 606.
  • the obtaining module 605 is configured to obtain the posture tracking result of the controller sent by the IMU before the second obtaining module 604 obtains the six-degree-of-freedom tracking data of the controller.
  • the second obtaining module 604 obtains the six-degree-of-freedom tracking data of the controller, including:
  • the position and posture of the controller and the result of posture tracking of the controller sent by the IMU are merged to obtain the six-degree-of-freedom tracking data of the controller.
  • the processing module 606 is configured to extract light points in the sequence of images
  • the first obtaining module 602 obtains the identifier corresponding to the target light point in the sequence image according to the initial identifier of the light point;
  • the first determining module 601 executes the step of acquiring, according to the image acquisition device, the transformed sequence images of the multi-point light-emitting unit during the movement of the controller and determining the transformation mode of the light points in the sequence images.
  • the light spot is an LED light spot
  • the conversion method includes color conversion and/or brightness level conversion
  • the first determining module 601 determines the transformation mode of light points in the sequence image, including:
  • the color and/or brightness level of the same point determine the color transformation and/or brightness level transformation of the same point in a set of sequence images.
  • the light spot is an infrared light spot
  • the conversion method includes infrared light-dark level conversion
  • the first determining module 601 determines the transformation mode of light points in the sequence image, including:
  • the infrared light-dark level transformation of the same point in a set of sequence images is obtained.
  • the processing module 606 recognizes the same points of adjacent frames in the sequence of images based on the light points, including:
  • the first obtaining module 602 obtains the identifier corresponding to the target light point in the sequence image according to the conversion mode of the light point, including:
  • the number of the target light points is not less than a preset number
  • the second obtaining module 604 obtains the position and posture of the controller according to the mapping position, the initial position of the target light spot, and the position of the image obtaining device during the movement of the controller, including;
  • the position and posture of the controller are obtained.
  • the device provided in the embodiment of the present application can be used to implement the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and the details of the embodiments of the present application are not repeated here.
  • FIG. 13 is a schematic diagram of the hardware structure of the tracking device of the controller provided by an embodiment of the application.
  • the tracking device 80 of the controller in this embodiment includes: a processor 801 and a memory 802; wherein
  • the memory 802 is used to store computer execution instructions
  • the tracking device further includes a bus 803 for connecting the memory 802 and the processor 801.
  • the embodiment of the present application also provides a computer-readable storage medium, and the computer-readable storage medium stores computer-executable instructions.
  • When the processor executes the computer-executable instructions, the tracking method of the controller as described above is implemented.
  • FIG. 14 is a schematic structural diagram of a VR system provided by an embodiment of the application.
  • the VR system 90 of this embodiment includes: an all-in-one machine 901 and a controller 902.
  • the all-in-one machine 901 is provided with a tracking processor 9011 of a controller, an image acquisition device 9012 and the like.
  • the controller 902 carries a multi-point light-emitting unit, and the multi-point light-emitting unit includes a plurality of light points.
  • a possible use form of the controller is a handle.
  • the tracking processor 9011 of the controller is configured to perform the above-mentioned method.
  • an IMU 9021 is provided on the aforementioned controller.
  • An embodiment of the present application also provides an AR system, including: an all-in-one machine and a controller.
  • the all-in-one machine is provided with a tracking processor of the controller, an image acquisition device, and the like.
  • the controller carries a multi-point light-emitting unit, the multi-point light-emitting unit includes a plurality of light points, and a possible use form of the controller is a handle.
  • an IMU is provided on the aforementioned controller.
  • the tracking processor of the controller can be used to execute the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and the details are not repeated here in the embodiments of the present application.
  • An embodiment of the present application also provides an XR system, including: an all-in-one machine and a controller.
  • the all-in-one machine is provided with a tracking processor of the controller, an image acquisition device, and the like.
  • the controller carries a multi-point light-emitting unit, and the multi-point light-emitting unit includes multiple light points.
  • a possible use form of the controller is a handle.
  • an IMU is provided on the aforementioned controller.
  • the tracking processor of the controller can be used to execute the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and the details are not repeated here in the embodiments of the present application.
  • Virtual reality (VR) helmet displays have a wide range of applications in industries such as education and training, fire drills, virtual driving, and real estate.
  • users often need to input some information to achieve VR interaction. For example, enter the account number and password; or adjust the volume and screen size while watching audio and video.
  • the relevant interactive methods in the VR field mainly include:
  • Head-movement hovering: the input mode of shell-type virtual reality head-mounted display devices mostly depends on the inertial sensing unit in the mobile phone.
  • The user controls the cursor movement by turning the head, and hovering the cursor over the option to be selected (such as confirm, return, music, video) for a certain period of time (such as 3 s or 5 s) serves as the selection confirmation. Although this method is simple, it sometimes fails to respond, and the hover duration is hard to define: if it is too short, misoperation is likely; if it is too long, operation efficiency is low and the user easily becomes confused, annoyed, and impatient, so the user experience is poor.
  • Speech recognition: this method can achieve interaction simply and effectively, but ambiguity sometimes occurs and the effect of speech semantic recognition is poor, especially across regions where local dialects may differ considerably from the standard language; moreover, this method is not suitable for deaf-mute users and has certain limitations.
  • Gesture recognition: this can be regarded as a better interaction method, but gesture operation has certain restrictions, the interaction process is highly scenario-specific, and the technology is not mature enough; operating a VR system proficiently with gestures is difficult, the accuracy and flexibility are relatively poor, and holding the arm in the camera's field of view for a long time is tiring, so the user experience is poor.
  • Fig. 15 is a scene diagram of a virtual reality interaction provided by related technologies.
  • the application scenario includes a helmet 11 and a peripheral device 12; the user wears the helmet 11, the helmet 11 renders a VR interface in the virtual scene, and the user interacts with the VR interface in the virtual scene through the peripheral device 12.
  • the peripheral device is equivalent to a prop in the game in the VR interface, such as swords, guns, etc.
  • the user inputs information to the VR game interface through the peripheral device, thereby controlling the props in the VR game interface.
  • Users can also interact with VR through VR's built-in touchpad, head-motion hovering, voice recognition, and binocular gesture recognition, but these VR input methods generally suffer from inconvenience, poor sensitivity, low accuracy, and poor responsiveness. In particular, it is inconvenient to input passwords when making payments, entering text, or logging in to an account.
  • the embodiments of the present application provide a virtual reality control device.
  • with this control device, the user can control the VR interface through touch operations;
  • the user's touch operation carries position information;
  • based on this position information, the user's operation position in the VR interface can be determined;
  • the user can therefore operate as conveniently as on a computer screen or a mobile phone screen.
  • FIG. 16 is a control logic diagram of a virtual reality control device provided by an embodiment of the application.
  • the embodiments of the present application provide a virtual reality control device in response to the above technical problems of related technologies.
  • the virtual reality control device includes: a touch interface 21-1, a controller 22-1, and a communicator 23-1;
  • the touch interface 21-1 is used to receive user touch operation information.
  • the controller 22-1, connected to the touch interface 21-1, is used to determine the user's touch operation and operation position information on the touch interface based on the touch operation information.
  • the communicator 23-1, connected to the controller 22-1, is used to send the touch operation and operation position information to the processor of the helmet, so that the processor of the helmet determines the corresponding position in the VR interface according to the preset mapping relationship between position points of the control device and of the VR interface rendered by the processor of the helmet and the operation position information, and performs the operation behavior corresponding to the touch operation at the corresponding position.
  • the communication mode of the communicator may be wired communication or wireless communication, which is not specifically limited in this embodiment.
  • a touch interface is provided on the control device, and the touch interface has multiple touch position points, and the VR interface rendered by the helmet processor also has multiple VR position points.
  • the preset mapping relationship between the position points on the touch interface and the position points on the VR interface can map the position points on the touch interface to the VR interface, and according to the user's operation behavior on the touch interface, the VR helmet Perform the corresponding operation at the corresponding position in the VR interface. For example, as shown in Figure 17, the A position point on the touch interface 21-1 corresponds to the B position point on the VR interface. If the user selects the A position point, then the B position point on the VR interface is also The selected operation will be performed.
  • the user moves from the A1 point on the touch interface 21-1 along the track a1 (shown by the arc in the touch interface in Figure 18) to the A2 point, if the VR interface The B1 position point on the above corresponds to the A1 position point, the B2 position point corresponds to the A2 position point, and the trajectory v1 (as shown by the arc in the VR interface in Figure 18) corresponds to a1, then the VR interface will also execute from the B1 position point The operation of moving along the trajectory v1 to the position B2.
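  • A minimal sketch of such a position mapping, assuming the preset mapping is a simple proportional one and using illustrative sizes and sample points:

```python
def touch_to_vr(point, touch_size, vr_size):
    """Map a position point (x, y) on the touch interface to the corresponding
    VR-interface position, assuming a simple proportional preset mapping."""
    (tx, ty), (tw, th), (vw, vh) = point, touch_size, vr_size
    return (tx / tw * vw, ty / th * vh)

# A trajectory a1 sampled on the touch interface maps point by point to the
# trajectory v1 on the VR interface (hypothetical sample coordinates and sizes).
trajectory_a1 = [(0.10, 0.20), (0.15, 0.30), (0.20, 0.35)]
trajectory_v1 = [touch_to_vr(p, (1.0, 1.0), (1920, 1080)) for p in trajectory_a1]
```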
  • the position information of the control device may also change at any time.
  • the inertial measurement unit 24-1 can also be used to measure the acceleration information of the control device and send it to the controller 22-1, so that the controller 22-1 calculates the position information of the control device based on the acceleration information and then sends the position information to the processor of the helmet, which adjusts the position of the rendered VR interface based on it.
  • the inertial measurement unit 24-1 can also collect the acceleration information and angular velocity information of the control device at the same time and send them to the controller 22-1, so that the controller 22-1 calculates the position information of the control device based on the acceleration information and the attitude information of the control device based on the angular velocity information; the position information and attitude information are then sent to the processor of the helmet together, so that the processor of the helmet adjusts the position of the rendered VR interface based on the position information and adjusts the pose of the rendered VR interface based on the attitude information.
  • the attitude angle of the control device can be measured, and the attitude angle includes a roll angle (roll), a pitch angle (pitch), and a yaw angle (yaw).
  • As shown in Figure 20, taking an airplane as an example, a point on the airplane is taken as the origin, the direction along the airplane's fuselage as the Y axis, the horizontal direction perpendicular to the Y axis as the X axis, and the direction of gravity as the Z axis.
  • the pitch angle refers to the angle produced by rotating around the X axis
  • yaw refers to the angle produced by rotating around the Y axis
  • roll refers to the angle produced by rotating around the Z axis.
  • the inertial measurement unit includes a gyroscope, an accelerometer, and a magnetometer; among them, the gyroscope is used to measure the angular velocity, and the attitude angle can be obtained by integrating the angular velocity.
  • the acceleration and gravity information of the control device can also be measured by the accelerometer.
  • the acceleration information can be used to correct the attitude angle deviation related to the direction of gravity, that is, the acceleration information can be used to correct the roll and pitch angles.
  • The yaw angle (yaw) can be calculated from the measurement data of the magnetometer, and the yaw deviation of the posture information can then be corrected.
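  • A minimal sketch of these corrections (axis and sign conventions vary; a common convention is used here, which may differ from the airplane example above, and the function names are illustrative):

```python
import math

def tilt_from_accelerometer(ax, ay, az):
    """Roll and pitch derived from the gravity direction measured by the
    accelerometer; used to correct the drift of the integrated gyroscope angles."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def yaw_from_magnetometer(mx, my, mz, roll, pitch):
    """Tilt-compensated yaw (heading) from the magnetometer measurement."""
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-yh, xh)

def complementary_update(angle_prev, gyro_rate, angle_ref, dt, k=0.98):
    """Integrate the gyroscope rate and pull the result toward the
    accelerometer/magnetometer reference angle (complementary filter)."""
    return k * (angle_prev + gyro_rate * dt) + (1.0 - k) * angle_ref
```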
  • the control device of the embodiment of the present application can provide multiple control modes, and in different control modes, users can have different interactive experiences.
  • the control device corresponds to a first control mode; when it is detected that the user selects the first control mode, the control device may also be used to obtain the user's touch position information on the touch interface 21-1 and send it to the processor of the helmet, so that the processor of the helmet determines the corresponding position of the VR interface based on the touch position information and the preset mapping relationship between position points of the control device and of the VR interface rendered by the processor of the helmet, and moves the cursor on the VR interface rendered in the virtual scene to the corresponding position.
  • the processor of the helmet renders the VR interface in the virtual scene, and the user can touch the touch interface just as a finger touches the screen of a mobile phone; the user's touch action on the touch interface generates touch position information.
  • The touch position information indicates the user's touch position point on the touch interface. According to the touch position point and the preset mapping relationship between position points on the touch interface and position points of the VR interface, the corresponding position point of the VR interface can be determined.
  • touch operation information can be formed and sent to the helmet.
  • the helmet can determine the corresponding position on the VR main interface based on the touch operation information and the preset mapping relationship.
  • In the first control mode, for the user, the control device functions like a mouse.
  • The difference from a mouse is that the user moves the position on the VR interface by touching the control device; the touch movement on the control device can be understood as being equivalent to moving a mouse.
  • the control device corresponds to a second control mode; when it is detected that the user selects the second control mode, the control device can also be used to obtain the user's touch operation on the touch interface and send the touch operation to the processor of the helmet, so that the processor of the helmet performs the corresponding operation at the corresponding position based on the user's touch operation.
  • the VR interface rendered by the processor of the helmet is located on the touch interface.
  • when the user touches the touch interface, it is like touching the screen of a mobile phone.
  • the difference from the first control mode is that the user can directly perform a click operation at any touch position, thereby realizing a confirmation operation.
  • the VR interface rendered on the control device is invisible to bystanders and is only visible to the user wearing the VR helmet, so entering an account and password is very safe.
  • the touch interface and the VR interface can be in a preset ratio of 1:1.
  • The position information and posture information of the control device can be obtained in real time from the measurement data of the inertial measurement unit, and the position information and posture information of the VR interface are adjusted based on them accordingly, so that the VR interface changes correspondingly with changes in the position and posture of the touch interface.
  • Both of the above control modes support single-point touch operation; that is, when the user's touch operation is detected, a corresponding cursor, such as an arrow or a circle, is displayed on the VR interface.
  • a mode selection function can also be provided to the user.
  • the user can freely choose whether to use the first control mode or the second control mode.
  • a mode button may be set on the control device, and the user selects to use the first control mode or the second control mode by operating the mode button.
  • the first mode button and the second mode button can be respectively set on the control device.
  • when the user operates the first mode button, it means that the user chooses to use the first control mode;
  • when the user operates the second mode button, it means that the user chooses to use the second control mode.
  • a mode button can be provided on the control device, and the user can choose to use the first control mode or the second control mode by performing different operations on the mode button.
  • long press and short press are relative terms, that is, the time of short press is shorter than long press. For example, if the user presses the mode button and then releases it, it means that the user selects the first control mode. If the user presses the mode button for more than 5 seconds, it means that the user selects the second control mode.
  • voice control can also be used to select the first control mode or the second control mode.
  • the user performs voice control by issuing a voice command "enter the first control mode" or "enter the second control mode".
  • gesture control can also be used to select the first control mode or the second control mode. For example, a pulling gesture from bottom to top is to open the first control mode; a pulling gesture from left to right is to open the second control mode. Of course, other gestures can also be set to select the first control mode or the second control mode. This embodiment will not repeat them one by one again.
  • the touch interface can adopt capacitive touch or infrared touch.
  • the capacitive touch can refer to the existing capacitive touch technology, which will not be repeated here.
  • infrared light-emitting diodes and photodiodes are arranged alternately on one side of the control device, and the control device also includes a control unit.
  • the area shown by the square or rectangular dashed box in the figure is a touchable area.
  • the light-emitting diode is used to emit infrared light.
  • When a touch object, such as a finger, touches within the touchable area, the photodiodes receive the light reflected by the touch object, forming an optical network; based on the emission and reception of the infrared light, an algorithm preset in the control unit performs calculations to determine the location of the touch point.
  • the control unit will continuously scan the infrared emitting tube and infrared receiving tube.
  • when the human hand touches within the touchable area and the infrared light emitted by the infrared light-emitting diodes meets the hand, part of the light is reflected.
  • a certain angle of photodiode is used to receive the infrared light reflected by the human hand, and then the control unit analyzes and calculates the position of the touch point according to the changes in the signal of the transmitting part and the receiving part.
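  • A minimal sketch of locating a touch point along one axis of such an optical network from the change in reflected infrared signal (the diode spacing, threshold, and function name are illustrative assumptions, not the patent's algorithm):

```python
def locate_touch(baseline, current, diode_pitch_mm, threshold):
    """Estimate the touch position along the photodiode row from the change in
    received (reflected) infrared intensity.

    baseline / current: intensity per photodiode without / with a touch object.
    diode_pitch_mm: spacing between adjacent photodiodes.
    threshold: minimum intensity change that counts as a reflection.
    """
    changes = [c - b for b, c in zip(baseline, current)]
    peak = max(range(len(changes)), key=lambda i: changes[i])
    if changes[peak] < threshold:
        return None                       # no touch detected
    # A weighted average over neighbouring diodes refines the estimate
    # to positions between two diodes.
    lo, hi = max(0, peak - 1), min(len(changes), peak + 2)
    weight = sum(max(changes[i], 0.0) for i in range(lo, hi))
    index = sum(i * max(changes[i], 0.0) for i in range(lo, hi)) / weight
    return index * diode_pitch_mm
```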
  • the processor of the helmet can make a synchronous response corresponding to the user's contact operation on the touch interface. For example, when the user chooses to watch an immersive video on the VR main interface, the control device can be used to adjust the volume and the playback progress; when playing a game, the projection interface can be called up to select resources, and so on. For the user, it is just like inputting information on the touch screen of a mobile phone, which conforms to natural user habits, so the application can be used directly without training.
  • the embodiments of the present application may further include detecting whether to exit the control mode: if the user wants to exit the control mode, the user can long-press the mode button again to exit; alternatively, in the control mode, the control mode can be exited automatically if no touch operation occurs for a long time.
  • the foregoing embodiment introduces a control device used for VR interaction in a virtual reality scene.
  • the following will introduce a helmet used for VR interaction in a virtual reality scene.
  • Fig. 26 is a schematic diagram of the control logic of a helmet provided by an embodiment of the present application.
  • the helmet provided by the embodiment of the present application includes a processor 121-1 connected to the control device and used to render the VR interface; when receiving the user's touch operation and operation position information on the control device, the processor determines the corresponding position in the VR interface according to the preset mapping relationship between position points of the control device and of the VR interface and the received operation position information on the control device, and performs the operation behavior corresponding to the touch operation at the corresponding position.
  • the processor 121-1 of the embodiment of the present application is also used to obtain the position information and posture information of the control device, determine, based on them, the projection position and posture information of the VR main interface to be displayed, and project the VR main interface to be displayed to the corresponding position based on the determined projection position and posture information.
  • the helmet of the embodiment of the present application further includes a camera 122-1 for collecting images containing the control device; the processor 121-1, connected to the camera 122-1, is used to determine the position information of the control device based on the images.
  • the location information of the control device refers to the location information of the control device in the world coordinate system.
  • the processor determines the position information of the control device based on the image collected by the camera. Specifically, an image processing method is adopted: first, the position information of the control device in the image coordinate system is determined, and it is then converted into the world coordinate system based on the conversion relationship between the image coordinate system and the world coordinate system, so as to obtain the position information of the control device.
  • the conversion relationship between the image coordinate system and the world coordinate system reference may be made to the introduction of related technologies, which will not be repeated in this embodiment.
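  • A minimal sketch of such a conversion, assuming the camera intrinsics, the camera pose in the world coordinate system, and the depth of the control device along the camera axis are available (the names are illustrative):

```python
import numpy as np

def image_to_world(pixel_uv, depth, camera_matrix, R_wc, t_wc):
    """Convert a pixel position of the control device into world coordinates.

    pixel_uv: (u, v) position of the control device in the image coordinate system.
    depth: distance of the control device along the camera's optical axis.
    camera_matrix: 3x3 camera intrinsic matrix.
    R_wc, t_wc: rotation (3x3) and translation (3,) of the camera in the world frame.
    """
    u, v = pixel_uv
    # Back-project the pixel into the camera coordinate system.
    p_cam = depth * np.linalg.inv(camera_matrix) @ np.array([u, v, 1.0])
    # Transform from the camera coordinate system into the world coordinate system.
    return R_wc @ p_cam + t_wc
```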
  • the processor 121-1 is further configured to obtain the position information and attitude information of the control device from the control device; the position information and attitude information of the control device are calculated based on the acceleration information and the angular velocity information of the control device, respectively.
  • the acceleration information and angular velocity information measured by the inertial measurement unit of the control device are respectively integrated to obtain position information and posture information, and then sent to the processor of the helmet through the communicator of the control device.
  • the processor 121-1 is also used to render the VR interface in the virtual scene and, when receiving the user's touch position information on the control device, to determine the corresponding position of the VR interface based on the touch position information and the preset mapping relationship between position points of the control device and of the VR interface rendered by the processor 121-1, and to move the cursor on the VR interface rendered in the virtual scene to the corresponding position.
  • the processor 121-1 is further configured to perform a confirmation operation on the current object on the VR interface in the case of receiving the confirmation key information of the user on the control device.
  • the processor 121-1 is also used to render the VR interface on the touch interface, and in the case of receiving the user’s touch operation on the touch interface, according to the user’s Touch operation, perform corresponding operations at the corresponding position on the VR interface.
  • For the specific implementations of the first control mode and the second control mode, please refer to the introduction in the control device part, which will not be repeated here.
  • The descriptions of the helmet's functions given in the control device part can also be applied to the helmet of this embodiment and will not be repeated here.
  • a virtual reality interactive system may also be provided.
  • the interactive system includes a virtual reality control device 131 and a helmet 132; for the control device 131 and the helmet 132, please refer to the introduction of the foregoing embodiments, which will not be repeated here.
  • FIG. 28 is a flowchart of a virtual reality interaction method provided by an embodiment of the application.
  • the virtual reality interaction method provided by this embodiment specifically includes the following steps:
  • Step 1401 Receive touch operation information of the user.
  • the execution subject of this embodiment may be the control device in the foregoing embodiment.
  • the user's touch operation information refers to the touch operation information on the control device.
  • Step 1402 based on the touch operation information, determine the user's touch operation and the operation position information on the control device.
  • touch operation refers to touch actions such as moving, sliding, touching, and clicking;
  • operation position information refers to position information of the touch action on the control device.
  • Step 1403 Send the touch operation and operation position information to the processor of the helmet, so that the processor of the helmet determines the corresponding position in the VR interface according to the preset mapping relationship between the control device and position points on the VR interface and the operation position information, and performs the operation behavior corresponding to the touch operation at the corresponding position.
  • the method of the embodiment of the present application further includes: measuring the angular velocity information of the control device; calculating the attitude information of the control device based on the angular velocity information; sending the attitude information to the processor of the helmet, so that the processor of the helmet is based on the attitude information Adjust the posture information of the rendered VR interface.
  • the method of the embodiment of the present application further includes: measuring the acceleration information of the control device; calculating the position information of the control device based on the acceleration information; sending the position information to the processor of the helmet, so that the processor of the helmet is based on the position information Adjust the position information of the rendered VR interface.
  • control device corresponds to a first control mode
  • the method in this embodiment of the present application further includes: in a case where it is detected that the user selects the first control mode, acquiring the user's touch position information on the touch interface, and sending it to The processor of the helmet, so that the processor of the helmet can determine the corresponding position of the VR interface based on the touch position information and the preset mapping relationship between the position points of the VR interface rendered by the control device and the processor of the helmet, and set it in the virtual scene The cursor on the rendered VR interface moves to the corresponding position.
  • the method of the embodiment of the present application further includes: in the case of detecting that the user selects the first control mode, acquiring the user's confirmation button information on the touch interface, and sending it to the processor of the helmet, so that the helmet The processor performs a confirmation operation on the current object on the VR interface.
  • the touch interface and the VR interface are in a preset proportional relationship.
  • control device corresponds to a second control mode
  • the method in this embodiment of the present application further includes: in the case of detecting that the user selects the second control mode, acquiring the user's touch operation on the touch interface, and sending it to The processor of the helmet enables the processor of the helmet to perform corresponding operations on the corresponding position based on the user's touch operation.
  • the touch interface and the VR interface are in a preset ratio relationship, and the preset ratio relationship is 1:1.
  • the control device corresponds to multiple control modes; the method in this embodiment of the present application further includes: receiving the user's selection information for the multiple control modes, and enabling the corresponding control mode based on the selection information; the selection information includes at least one of the following: mode button selection, voice command selection, and gesture selection.
  • the method of the embodiment of the present application further includes: in the case of receiving the user's touch operation information, sending a vibration instruction to the motor, and controlling the motor to vibrate based on the vibration instruction.
  • the virtual reality interaction method of the embodiment shown in FIG. 28 can be used to implement the technical solutions of the foregoing control device embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • Step 1501 Render the VR interface.
  • the execution subject of this embodiment may be the helmet of the above embodiment.
  • the VR interface is rendered by the helmet.
  • Step 1502 in the case of receiving the user's touch operation and operation position information on the control device, according to the preset mapping relationship between the control device and the position point on the VR interface, and the received operation position information on the control device, Determine the corresponding position in the VR interface.
  • Step 1503 Perform an operation behavior corresponding to the touch operation behavior at the corresponding position.
  • For the specific implementation process of step 1502 and step 1503, reference may be made to the introduction of the foregoing embodiments, which will not be repeated here.
  • the method of the embodiment of the present application further includes: acquiring position information and posture information of the control device; determining, based on the position information and posture information, the projection position and posture of the VR main interface to be displayed; and projecting the VR main interface to be displayed to the corresponding position based on the determined projection position and posture.
  • the method of the embodiment of the present application further includes: collecting an image including the control device; and determining the location information of the control device based on the image.
  • the method of the embodiment of the present application further includes: acquiring position information and attitude information of the control device from the control device; the position information and attitude information of the control device are calculated based on the acceleration information and angular velocity information of the control device, respectively.
  • the method of the embodiment of the present application further includes: rendering the VR interface in the virtual scene, and, when the user's touch position information on the control device is received, determining the corresponding position on the VR interface based on the touch position information and the preset mapping relationship between position points on the control device and position points on the VR interface rendered by the processor of the helmet, and moving the cursor on the VR interface rendered in the virtual scene to the corresponding position.
  • the method of the embodiment of the present application further includes: in the case of receiving the confirmation key information of the user on the control device, performing a confirmation operation on the current object on the VR interface.
  • the method of the embodiment of the present application further includes: rendering the VR interface on the touch interface, and, when the user's touch operation on the touch interface is received, performing the corresponding operation at the corresponding position in the VR interface according to the user's touch operation.
  • the virtual reality interaction method of the embodiment shown in FIG. 29 can be used to implement the technical solution of the foregoing helmet embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 30 is a schematic structural diagram of a virtual reality control device provided by an embodiment of the application.
  • a virtual reality control device provided by an embodiment of the present application can execute the processing flow provided in an embodiment of a virtual reality control method as shown in FIG. 28.
  • a virtual reality control device 160 includes: a memory 161, a processor 162, a computer program, and a communication interface 163; the computer program is stored in the memory 161 and is configured to be executed by the processor 162 to perform the processing flow provided by the embodiment of the virtual reality control method shown in FIG. 28.
  • the virtual reality control device of the embodiment shown in FIG. 30 can be used to implement the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 31 is a schematic structural diagram of a virtual reality control device provided by an embodiment of this application.
  • a virtual reality control device provided by an embodiment of the present application can execute the processing flow provided in an embodiment of a virtual reality control method as shown in FIG. 29.
  • a virtual reality control device 170 includes: a memory 171, a processor 172, a computer program, and a communication interface 173; the computer program is stored in the memory 171 and is configured to be executed by the processor 172 to perform the processing flow provided by the embodiment of the virtual reality control method shown in FIG. 29.
  • the virtual reality control device of the embodiment shown in FIG. 31 can be used to execute the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • an embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the virtual reality interaction method of the embodiment shown in FIG. 28.
  • an embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the virtual reality interaction method of the embodiment shown in FIG. 29.
  • the above-mentioned embodiments can refer to each other and learn from each other, and the same or similar steps and nouns will not be repeated one by one.
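The preset proportional mapping between the touch interface of the control device and the rendered VR interface described in the items above can be illustrated with a short sketch. The following Python fragment is an illustrative assumption only (the application does not prescribe any concrete implementation, and the resolutions and function name are invented for the example): it scales a touch coordinate into a VR-interface coordinate, with the 1:1 ratio of the second control mode as a special case.

```python
# Minimal sketch (not from the patent text): map a touch point on the control
# device's touch interface to a position on the VR interface rendered by the
# helmet, assuming the two interfaces are related by a fixed proportional mapping.

def map_touch_to_vr(touch_xy, touch_size, vr_size):
    """touch_xy: (x, y) on the touch interface, in pixels.
    touch_size: (width, height) of the touch interface.
    vr_size: (width, height) of the rendered VR interface.
    Returns the corresponding (x, y) on the VR interface."""
    sx = vr_size[0] / touch_size[0]   # horizontal scale of the preset mapping
    sy = vr_size[1] / touch_size[1]   # vertical scale of the preset mapping
    return (touch_xy[0] * sx, touch_xy[1] * sy)

# Example: a larger VR interface (first control mode) scales the cursor position,
# while a 1:1 ratio (second control mode) leaves the coordinates unchanged.
print(map_touch_to_vr((120, 80), touch_size=(240, 160), vr_size=(1920, 1080)))
print(map_touch_to_vr((120, 80), touch_size=(240, 160), vr_size=(240, 160)))
```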

Abstract

Provided in the present application are a virtual reality-based controller light ball tracking method and a virtual reality device. The method comprises: according to first posture information of a previous position point of a light ball, determining second posture information of a subsequent position point adjacent to the previous position point; according to the first position information of the previous position point, the first posture information, and the second posture information, determining second position information of the subsequent position point; and according to the second position information, generating and outputting a current display position of a virtual target corresponding to a controller. The described method may improve the accuracy and precision of tracking and positioning a light ball, and may quickly track and position the light ball, improve the interaction speed between a user and a virtual reality environment, and improve the user experience.

Description

Controller light ball tracking method based on virtual reality, and virtual reality device
Cross-reference to related applications
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on March 27, 2020 under application number 202010230449.2 and entitled "Virtual reality-based controller light ball tracking method and virtual reality device", to the Chinese patent application filed with the Chinese Patent Office on March 31, 2020 under application number 202010246509.X and entitled "Virtual reality control device, helmet and interaction method", and to the Chinese patent application filed with the Chinese Patent Office on March 27, 2020 under application number 202010226710.1 and entitled "Controller tracking method and VR system", the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of simulation technology, and in particular to a method for tracking a controller light ball based on virtual reality, and a virtual reality device.
Background
With the development of science and technology, virtual reality, augmented reality (AR), mixed reality (MR), and extended reality (XR) technologies have developed rapidly and have been applied in many industries, for example three-dimensional games, military simulation training, and simulated surgery in medicine. VR, AR, MR, and XR systems generally include a helmet and a controller; by tracking the controller, objects in the virtual world can be manipulated, so that the user interacts with the surrounding environment by controlling the movement of the controller.
The controller may also be called a handle; the controller can emit light through a light ball, and the position of the light ball then needs to be tracked in order to locate the target and complete the virtual reality operation. How to locate and track the controller is a technical problem that needs to be solved in the industry.
Summary of the invention
The present application provides a virtual reality-based controller photosphere tracking method and virtual reality device, to solve the problem of positioning errors or delays in existing photosphere tracking technology.
In a first aspect, the present application provides a virtual reality-based controller photosphere tracking method, the method including:
determining, according to first posture information of a previous position point of the photosphere, second posture information of a next position point adjacent to the previous position point;
determining second position information of the next position point according to first position information of the previous position point, the first posture information, and the second posture information;
generating and outputting, according to the second position information, a current display position of a virtual target corresponding to the controller.
In a second aspect, the present application provides a virtual reality-based controller photosphere tracking apparatus, the apparatus including:
a first processing unit, configured to determine, according to first posture information of a previous position point of the photosphere, second posture information of a next position point adjacent to the previous position point;
a second processing unit, configured to determine second position information of the next position point according to first position information of the previous position point, the first posture information, and the second posture information;
a third processing unit, configured to generate and output, according to the second position information, a current display position of a virtual target corresponding to the controller.
In a third aspect, this application provides an electronic device, including:
at least one processor; and
a memory communicatively connected with the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the method according to any one of the first aspect.
In a fourth aspect, the present application provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method according to any one of the first aspect.
In a fifth aspect, this application provides a virtual reality device, the virtual reality device including:
a display screen, the display screen being used to display images;
a processor, the processor being configured to:
determine, according to first posture information of a previous position point of the photosphere on the controller, second posture information of a next position point adjacent to the previous position point;
determine second position information of the next position point according to first position information of the previous position point, the first posture information, and the second posture information;
determine the position of the controller according to the second position information, so as to display the picture.
According to the virtual reality-based controller light ball tracking method provided in the present application, second posture information of a next position point adjacent to a previous position point of the light ball is determined according to first posture information of the previous position point; second position information of the next position point is determined according to first position information of the previous position point, the first posture information, and the second posture information; and a current display position of the virtual target corresponding to the controller is generated and output according to the second position information. Using the position information and posture information of the light ball at the previous position point to predict its position information at the next position point effectively overcomes the problem that image processing is easily affected by the background color of the environment in which the light ball is located, and improves the accuracy and precision of light ball tracking and positioning. At the same time, since there is no need to perform image acquisition and image processing on the light ball at the next position point, delays and stutters caused by the image acquisition and image processing process are avoided, so the light ball can be tracked and positioned quickly, the interaction speed between the user and the virtual reality environment is improved, and the user experience is enhanced.
Description of the drawings
The drawings herein are incorporated into and constitute a part of the specification, show embodiments consistent with the application, and are used together with the specification to explain the principles of the application.
FIG. 1 is a schematic flowchart of a virtual reality-based controller photosphere tracking method provided by an embodiment of the application;
FIG. 1a is a schematic diagram of a controller equipped with a light ball provided by an embodiment of the application;
FIG. 1b is a schematic diagram of a motion trajectory of a photosphere provided by an embodiment of the application;
FIG. 2 is a schematic flowchart of another virtual reality-based controller photosphere tracking method provided by an embodiment of the application;
FIG. 2a is a schematic diagram of a human head rotating up and down about the neck provided by an embodiment of the application;
FIG. 2b is a schematic diagram of human eyes rotating left and right about the occipital bone of the head provided by an embodiment of the application;
FIG. 2c is a schematic diagram of a human arm rotating about the elbow provided by an embodiment of the application;
FIG. 2d is a schematic diagram of the light ball moving from point J to point K provided by an embodiment of the application;
FIG. 3 is a schematic structural diagram of a virtual reality-based controller photosphere tracking apparatus provided by an embodiment of the application;
FIG. 4 is a schematic structural diagram of yet another virtual reality-based controller photosphere tracking apparatus provided by an embodiment of the application;
FIG. 5 is a schematic structural diagram of a virtual reality-based controller photosphere tracking device provided by an embodiment of the application;
FIG. 6 is a schematic diagram of an application scenario provided by an embodiment of the application;
FIG. 7 is a schematic diagram of a controller provided by an embodiment of the application;
FIG. 8 is a schematic flowchart of a controller tracking method provided by an embodiment of the application;
FIG. 9 is a schematic flowchart of another controller tracking method provided by an embodiment of the application;
FIG. 10 is a schematic flowchart of still another controller tracking method provided by an embodiment of the application;
FIG. 11 is a schematic structural diagram of a controller tracking apparatus provided by an embodiment of the application;
FIG. 12 is a schematic structural diagram of another controller tracking apparatus provided by an embodiment of the application;
FIG. 13 is a schematic diagram of the hardware structure of the controller tracking device provided by an embodiment of the application;
FIG. 14 is a schematic structural diagram of the VR system provided by an embodiment of the application;
FIG. 15 is a scene diagram of virtual reality interaction provided by the related art;
FIG. 16 is a control logic diagram of a virtual reality control device provided by an embodiment of the application;
FIG. 17 is a schematic diagram of touch control provided by an example of this application;
FIG. 18 is a schematic diagram of touch control provided by another example of this application;
FIG. 19 is a control logic diagram of a virtual reality control device provided by another embodiment of the application;
FIG. 20 is a schematic diagram of the posture angle provided by an embodiment of the application;
FIG. 21 is a schematic diagram of the touch main interface in the first control mode provided by an embodiment of the application;
FIG. 22 is a schematic diagram of the touch principle in the first control mode provided by an embodiment of the application;
FIG. 23 is a schematic diagram of the touch principle in the second control mode provided by an embodiment of the application;
FIG. 24 is a schematic structural diagram of infrared touch provided by an embodiment of the application;
FIG. 25 is a schematic diagram of the principle of infrared touch provided by an embodiment of the application;
FIG. 26 is a schematic diagram of the control logic of the helmet provided by an embodiment of the application;
FIG. 27 is a schematic diagram of a virtual reality interaction system provided by an embodiment of the application;
FIG. 28 is a flowchart of a virtual reality interaction method provided by an embodiment of the application;
FIG. 29 is a flowchart of a virtual reality interaction method provided by another embodiment of this application;
FIG. 30 is a block diagram of a virtual reality control device provided by an embodiment of the application;
FIG. 31 is a block diagram of the helmet provided by an embodiment of the application.
The above drawings show specific embodiments of the present application, which are described in more detail below. These drawings and the written description are not intended to limit the scope of the concept of the present application in any way, but to explain the concept of the present application to those skilled in the art by reference to specific embodiments.
Detailed description
Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings represent the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the application as detailed in the appended claims. First, the terms involved in this application are explained:
Light ball (photosphere): a luminous sphere used to track and locate a target in virtual reality technology. Its emitted color can be a highly saturated visible-light color or infrared light, and the light ball is usually mounted on a controller.
Posture: the attitude and rotation of an object in three-dimensional space, expressed by a rotation matrix, Euler angles, or a quaternion (four elements).
Inertial sensor: a sensor mainly used to detect and measure acceleration, tilt, shock, vibration, rotation, and multi-degree-of-freedom (DoF) motion; it is an important component for navigation, orientation, and motion-carrier control. It usually includes a gyroscope, an accelerometer, and a magnetometer, as follows:
(1) Gyroscope: measures the angular velocity, and the attitude can be obtained by integrating the angular velocity; however, errors arise during integration and accumulate over time, eventually leading to an obvious attitude deviation.
(2) Accelerometer: measures the acceleration of the device, which contains gravity information; therefore, accelerometer data can be used to correct the attitude deviation related to the direction of gravity, that is, the accelerometer can correct the roll and pitch angle deviations.
(3) Magnetometer: the yaw angle (yaw) can be calculated from the magnetometer, and the attitude can be corrected accordingly.
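As a rough illustration of how the three sensors listed above are commonly fused (gyroscope integration, accelerometer correction of roll and pitch, magnetometer correction of yaw), a hypothetical complementary-filter sketch in Python is given below. The filter gain, function name, and sample values are assumptions for illustration and are not part of this application.

```python
import math

# Hypothetical complementary filter: integrate the gyroscope, then pull roll/pitch
# toward the accelerometer estimate and yaw toward the magnetometer estimate,
# so the integration drift described above does not accumulate without bound.

def update_attitude(roll, pitch, yaw, gyro, accel, mag_yaw, dt, k=0.02):
    # 1) Gyroscope: propagate the attitude by integrating angular velocity (rad/s).
    roll += gyro[0] * dt
    pitch += gyro[1] * dt
    yaw += gyro[2] * dt
    # 2) Accelerometer: the gravity direction gives an absolute roll/pitch reference.
    ax, ay, az = accel
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.hypot(ay, az))
    roll = (1 - k) * roll + k * roll_acc
    pitch = (1 - k) * pitch + k * pitch_acc
    # 3) Magnetometer: the computed yaw corrects heading drift.
    yaw = (1 - k) * yaw + k * mag_yaw
    return roll, pitch, yaw

# Example update with 10 ms of IMU data.
print(update_attitude(0.0, 0.0, 0.0,
                      gyro=(0.1, 0.0, 0.05),
                      accel=(0.0, 0.2, 9.8),
                      mag_yaw=0.01, dt=0.01))
```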
A specific application scenario of this application: with the development of virtual reality technology, virtual reality has been applied in production and daily life. A user can wear a virtual reality device and then perform virtual reality operations. A virtual reality device is provided with a controller, which may also be called a handle; the controller is equipped with a light ball, and the position of the light ball needs to be tracked to locate the target and complete the virtual reality operation. For example, the user can control a character in the virtual reality world to complete a waving action by waving a handheld controller equipped with a light ball.
In the related art, the controller can emit visible light; a captured image can be obtained according to the visible light emitted by the controller, and image processing is then performed on the captured image to obtain the position point of the light ball, so as to locate the target.
However, in the related art the position of the light ball is determined entirely by image processing, which is easily disturbed by environmental factors and by factors of the image acquisition unit itself, so that the obtained position point of the light ball is inaccurate, resulting in target positioning errors or positioning delays. For example, when the visible light emitted by the light ball is red and the environment in which the light ball is located has a red background color, the image acquisition unit may be unable to accurately capture the position of the light ball, resulting in inaccurate positioning of the light ball; at the same time, the image acquisition device needs to distinguish the red light emitted by the light ball from the red background color, which makes positioning of the light ball slower and causes stutters and delays.
The virtual reality-based controller light ball tracking method provided in this application aims to solve the above technical problems of the related art.
The technical solution of the present application and how it solves the above technical problems are described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present application are described below in conjunction with the accompanying drawings.
FIG. 1 is a schematic flowchart of a virtual reality-based controller photosphere tracking method provided by an embodiment of the application. As shown in FIG. 1, the method includes:
Step 101: according to first posture information of a previous position point of the photosphere, determine second posture information of a next position point adjacent to the previous position point.
In this embodiment, specifically, the execution subject of this embodiment is a terminal device, a server or controller provided on the terminal device, or another apparatus or device that can execute this embodiment. This embodiment is described by taking application software provided on a terminal device as the execution subject; the terminal device here may be a VR device.
In virtual reality technology, a luminous light ball is usually used to locate and track a moving target within a spatial range. For example, a controller with a luminous light ball held or worn by the user can be used to track the user's movements or actions: the position of the light ball is the position of the user or of the body part on which the light ball is worn, and the motion trajectory of the light ball is the motion trajectory of the user or of that body part. FIG. 1a is a schematic diagram of a controller equipped with a light ball provided by this embodiment. As shown in FIG. 1a, the controller can be equipped with light balls of different colors, and light balls of different colors can represent different users or different body parts of a user.
This embodiment is described by taking as an example the case where the user stays in place while part of the user's body (for example the head, eyes, or arm) wears a controller with a light ball and performs different actions. When it is detected that the light ball has changed from the previous position point to the next position point, it means that the body part wearing the light ball has also moved from the previous position point to the next position point. For example, the user can wear a controller with a light ball on the head; when the head rotates up and down about the neck, the light ball also rotates accordingly in space with the head, so detecting the position change of the light ball indirectly detects the position change of the head during the rotation. Alternatively, the user can wear a controller with a light ball near the eyes; when the eyes rotate left and right about the occipital bone of the head, the light ball rotates accordingly in space with the eyes, and detecting its position change indirectly detects the position change of the eyes during the rotation. Or the user can wear a controller with a light ball on the arm; when the arm rotates about the elbow, the light ball rotates accordingly in space with the arm, and detecting its position change indirectly detects the position change of the arm during the rotation.
In this embodiment, specifically, after the light ball moves from the previous position point to the next position point, both the position information and the posture information of the light ball change. In order to avoid the operation delays and stutter failures caused by locating the light ball with image recognition technology, this embodiment uses the first posture information of the light ball at the previous position point to predict the second posture information of the light ball at the next position point, without using image recognition technology again to identify the position information of the light ball at the next position point.
The "previous position point" and "next position point" in this embodiment are two adjacent position points, which can be any two adjacent position points on the motion trajectory of the light ball and are not limited to the start and end points of the trajectory; for example, they can be any two position points on the trajectory separated by a preset time dt. The preset time dt can be set according to the required tracking accuracy of the light ball position, for example 10 ms or 20 ms.
The method of determining, according to the first posture information of the previous position point of the photosphere, the second posture information of the next position point adjacent to the previous position point can be a conventional method in the art, for example an attitude solution algorithm. Illustratively, FIG. 1b is a schematic diagram of the trajectory of a photosphere provided by this embodiment. As shown in FIG. 1b, point A and point B are two adjacent position points on the trajectory of the photosphere, point A is the previous position point, point B is the next position point, and the posture information of the photosphere at point A is Q0; then the posture information Qt of the photosphere at point B can be calculated according to the following Formulas I and II:
Qt = Q0 * Δq    (Formula I)
Δq = ω * dt    (Formula II)
where ω is the rotational angular velocity and dt is the preset time interval.
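Formulas I and II amount to elementary quaternion arithmetic: Δq is the small rotation accumulated over dt at angular velocity ω, and it is multiplied onto the previous attitude Q0. The following Python sketch is illustrative only; the numpy-based representation and helper names are assumptions, not part of the application.

```python
import numpy as np

def quat_multiply(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def delta_quat(omega, dt):
    """Formula II: small rotation Δq accumulated over dt at angular velocity ω (rad/s)."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = np.asarray(omega) / np.linalg.norm(omega)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def predict_attitude(q0, omega, dt):
    """Formula I: Qt = Q0 * Δq, renormalised to keep a unit quaternion."""
    qt = quat_multiply(q0, delta_quat(omega, dt))
    return qt / np.linalg.norm(qt)

# Example: previous attitude Q0 is the identity, ω = 1 rad/s about z, dt = 10 ms.
q0 = np.array([1.0, 0.0, 0.0, 0.0])
print(predict_attitude(q0, omega=[0.0, 0.0, 1.0], dt=0.01))
```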
Step 102: determine second position information of the next position point according to the first position information, the first posture information, and the second posture information of the previous position point.
In this embodiment, specifically, the displacement Δl of the photosphere from the previous position point to the next position point of the two adjacent position points is calculated and determined according to the first posture information and the second posture information, and the second position information of the photosphere at the next position point is then calculated and determined according to the first position information of the previous position point and the displacement Δl.
The purpose of this embodiment is to use the first position information and first posture information of the photosphere at the previous of two adjacent position points to predict the second position information of the photosphere at the next position point, without performing an image-recognition scan for the second position information at the next position point. This can effectively overcome the operation delays and stutter failures caused by using image recognition technology to identify and locate the second position information of the photosphere at the next position point.
Step 103: generate and output a current display position of the virtual target corresponding to the controller according to the second position information.
In this embodiment, specifically, the current display position of the virtual target corresponding to the controller is generated and output according to the second position information of the photosphere in real space; the current display position of the virtual target corresponding to the controller can be output on a VR display, and it can also be output in the virtual reality space.
The method of generating and outputting the current display position of the virtual target corresponding to the controller on the VR display and/or in the virtual reality space according to the second position information can be a conventional method in the art, which will not be repeated in this embodiment.
In this embodiment, second posture information of the next position point adjacent to the previous position point is determined according to the first posture information of the previous position point of the photosphere; second position information of the next position point is determined according to the first position information, the first posture information, and the second posture information of the previous position point; and the current display position of the virtual target corresponding to the controller is generated and output according to the second position information. Using the position information and posture information of the light ball at the previous position point to predict its position information at the next position point effectively overcomes the problem that image processing is easily affected by the background color of the environment in which the light ball is located, and improves the accuracy and precision of light ball tracking and positioning; at the same time, since there is no need to perform image acquisition and image processing on the light ball at the next position point, delays and stutters caused by image acquisition and processing are avoided, the light ball can be tracked and positioned quickly, the interaction speed between the user and the virtual reality environment is improved, and the user experience is enhanced.
FIG. 2 is a schematic flowchart of another virtual reality-based controller photosphere tracking method provided by an embodiment of the application. As shown in FIG. 2, the method includes:
Step 201: acquire first position information and first posture information of the previous position point of the photosphere.
A camera and image recognition technology can be used to obtain the first position information of the previous position point of the photosphere, specifically: the camera is used to obtain image data when the photosphere is at the previous position point, the acquired image data is recognized and processed by image recognition technology to obtain the position of the center of the photosphere, and the position of the center of the photosphere is converted into three-dimensional coordinates to obtain the first position information of the photosphere. Image recognition technology is conventional in the field and will not be repeated in this embodiment.
An inertial measurement unit (IMU) can be used to collect IMU data when the photosphere is at the previous position point, and the collected IMU data is processed to obtain the first posture information of the photosphere. The collected IMU data can be processed with an attitude solution algorithm to obtain the first posture information of the photosphere. The first posture information of the photosphere includes at least the rotational angular velocity, the acceleration, or the yaw angle. Illustratively, the IMU can be used to collect the gravitational acceleration when the photosphere is at the previous position point, and the rotational angular velocity can be obtained according to the gravitational acceleration.
Optionally, acquiring the first position information of the previous position point of the photosphere includes: acquiring an image, where the image is the image collected by the acquisition unit when the photosphere is located at the previous position point; and determining, according to the image, the position of the photosphere in the image to obtain the first position information. The acquisition unit can be a camera; in order to obtain the position information of the photosphere in three-dimensional space, multiple cameras can be set up to capture images of the photosphere at the same time, and a spatial triangulation algorithm is then used to determine the first position information of the photosphere at the previous position point. Before using the cameras to capture photosphere images to obtain the position information of the photosphere, the positions and postures of the cameras need to be calibrated in advance using markers of known position and posture.
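A compact, illustrative sketch of the spatial triangulation step mentioned above: with two pre-calibrated cameras whose 3x4 projection matrices are known, a linear (DLT) solution recovers the 3D position of the photosphere center from its pixel coordinates in the two images. The camera parameters and the DLT formulation below are assumptions for illustration; the application does not fix a particular triangulation algorithm.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear triangulation (DLT): P1, P2 are 3x4 camera projection matrices
    obtained from the pre-calibration step; uv1, uv2 are the pixel coordinates
    of the photosphere center in the two images. Returns a 3D point."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Illustrative example: two cameras one metre apart along x, both looking along +z.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.2, 0.1, 2.0, 1.0])                 # true photosphere position
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, uv1, uv2))                   # ≈ [0.2, 0.1, 2.0]
```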
Optionally, acquiring the first posture information of the previous position point of the photosphere includes: using a gyroscope to obtain the angular velocity of the photosphere at the previous position point; using an accelerometer to obtain the acceleration of the photosphere at the previous position point; and using a magnetometer to obtain the yaw angle of the photosphere at the previous position point. The above methods of obtaining the first posture information with inertial sensors can all be conventional methods in the field, and details are not described again in this embodiment.
Optionally, this embodiment further includes the operation of storing the acquired first position information, so that it can be used in subsequent steps.
Step 202: acquire the posture data detected by the inertial measurement unit; determine the second posture information according to the first posture information, the posture data, and a preset movement time, where the movement time is the time required for the photosphere to move from the previous position point to the next position point.
In this embodiment, specifically, the inertial measurement unit includes an inertial sensor, and the posture data includes any one of the following: rotational angular velocity, gravitational acceleration, yaw angle, and pitch angle. This embodiment is described by taking the posture data as the rotational angular velocity.
Determining the second posture information according to the first posture information, the posture data, and the preset movement time includes: determining the movement angle according to the posture data and the movement time; and determining the second posture information according to the movement angle and the first posture information. The movement time refers to the time required for the light ball to move from the previous position point to the next position point, and its length can be set according to actual needs, for example according to the required accuracy of light ball tracking and positioning: when high tracking accuracy is required, a shorter movement time can be set; conversely, when lower tracking accuracy is acceptable, a longer movement time can be set. In general, the movement time can be set to 10 ms to 20 ms. The movement angle refers to the angle through which the light ball rotates within the movement time.
Illustratively, taking the posture data as the rotational angular velocity ω and presetting the movement time as dt, the movement angle is Δq = ω * dt; assuming that the first posture information of the photosphere at the previous position point is Q0, the second posture information Qt of the photosphere at the next position point is Qt = Q0 * Δq.
Step 203: determine, according to the first posture information, a first predicted position of the photosphere at the previous position point, where the first predicted position represents the position of the photosphere relative to an initial position point when the photosphere is located at the previous position point.
In this embodiment, specifically, determining the first predicted position of the photosphere at the previous position point according to the first posture information includes: determining the first predicted position according to the first posture information and a preset bone-joint model, where the bone-joint model is used to indicate the movement relationship of a human joint.
Specifically, the bone-joint model is used to indicate how the position or movement trajectory of a human joint changes over time; when a photosphere is worn at the joint, the bone-joint model can also be used to indicate how the position or movement trajectory of the photosphere changes over time. The bone-joint model includes a preset moving radius; determining the first predicted position according to the first posture information and the preset bone-joint model includes: determining the first predicted position according to the first posture information, the moving radius, and a preset first movement time, where the first movement time is the time required for the photosphere to move from the initial position point to the previous position point.
The bone-joint model in this embodiment is adapted to the position of the human joint, and different human joints correspond to different bone-joint models. Illustratively, the bone-joint models in this embodiment include a head model, an eye model, and an arm model. This embodiment takes the human head, eyes, and arm in a two-dimensional plane xoy coordinate system as examples to illustrate the bone-joint model.
FIG. 2a is a schematic diagram, provided by this embodiment, of a human head rotating up and down about the neck. As shown in FIG. 2a, point O_1 represents the position of the neck, and points L, M, and N represent positions of the head; the head rotates from point L through point M to point N at a rotational angular velocity ω_1, where point L is the starting position point of the rotation, and points M and N are respectively the previous position point and the next position point of the two adjacent position points; the distance r_1 between the head and the neck is the radius of the rotational trajectory. Using the method of this embodiment, the first predicted position M(x_M, y_M) of the photosphere at point M is determined as:
θ_M = ω_1 * dt_1    (1)
x_M = r_1 * sin(θ_M) = r_1 * sin(ω_1 * dt_1)    (2)
y_M = r_1 * cos(θ_M) = r_1 * cos(ω_1 * dt_1)    (3)
The above equations (1)-(3) constitute the head model of this embodiment, where ω_1 is the first posture information of the head at point M, r_1 is the moving radius, and dt_1 is the preset first movement time.
FIG. 2b is a schematic diagram, provided by this embodiment, of human eyes rotating left and right about the occipital bone of the head. As shown in FIG. 2b, point O_2 represents the position of the occipital bone, and points F, G, and H represent positions of the eyes; the eyes rotate from point F through point G to point H at a rotational angular velocity ω_2, where point F is the starting position point of the rotation, and points G and H are respectively the previous position point and the next position point of the two adjacent position points; the distance r_2 between the eyes and the occipital bone is the radius of the rotational trajectory. Using the method of this embodiment, the first predicted position G(x_G, y_G) of the photosphere at point G is determined as:
θ_G = ω_2 * dt_2    (4)
x_G = r_2 * sin(θ_G) = r_2 * sin(ω_2 * dt_2)    (5)
y_G = r_2 * cos(θ_G) = r_2 * cos(ω_2 * dt_2)    (6)
The above equations (4)-(6) constitute the eye model of this embodiment, where ω_2 is the first posture information of the eyes at point G, r_2 is the moving radius, and dt_2 is the preset first movement time.
FIG. 2c is a schematic diagram, provided by this embodiment, of a human arm rotating about the elbow. As shown in FIG. 2c, point O_3 represents the position of the elbow, and points C, D, and E represent positions of the arm; the arm rotates from point C through point D to point E at a rotational angular velocity ω_3, where point C is the starting position point of the rotation, and points D and E are respectively the previous position point and the next position point of the two adjacent position points; the distance r_3 between the arm and the elbow is the radius of the rotational trajectory. Using the method of this embodiment, the first predicted position D(x_D, y_D) of the photosphere at point D is determined as:
θ_D = ω_3 * dt_3    (7)
x_D = r_3 * sin(θ_D + β) = r_3 * sin(ω_3 * dt_3 + β)    (8)
y_D = r_3 * cos(θ_D + β) = r_3 * cos(ω_3 * dt_3 + β)    (9)
The above equations (7)-(9) constitute the arm model of this embodiment, where ω_3 is the first posture information of the arm at point D, r_3 is the moving radius, dt_3 is the preset first movement time, and β is the angle between the vertical direction and the line connecting the starting position point C and the elbow position O_3.
The above equations (1)-(3), (4)-(6), and (7)-(9) only illustrate the bone-joint models of the human head, eyes, and arm in the two-dimensional plane xoy coordinate system. The above method of this embodiment can also be used to determine bone-joint models of other parts of the human body, for example a wrist model of the human wrist, which will not be repeated in this embodiment.
For the bone-joint model of a human joint in the three-dimensional xoyz coordinate system, the position of the joint in the three-dimensional xoyz coordinate system can first be decomposed into positions in the two-dimensional xoy, xoz, and yoz coordinate planes, the bone-joint models of the joint in these three two-dimensional planes are determined separately using the above method, and the three models are then combined to obtain the bone-joint model of the joint in the three-dimensional xoyz coordinate system. In this embodiment, the bone-joint model of a human joint is expressed comprehensively by equation (10):
p = f(q) = q * (0, ln, 0) * q^(-1)    (10)
where p is the position information of the human joint, q is the posture information of the joint at a certain position point, q^(-1) is the inverse of the quaternion q, and ln is the length of the moving radius of the joint.
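Equation (10) is an ordinary quaternion rotation of the constant radius vector (0, ln, 0); the planar models (1)-(9) are the special cases in which the rotation stays in a single plane. A minimal Python sketch is given below for illustration; the vector form of the rotation and the helper names are assumptions, not part of the application.

```python
import numpy as np

def rotate(q, v):
    """Rotate the 3-vector v by the unit quaternion q = (w, x, y, z);
    equivalent to the quaternion sandwich product q * (0, v) * q^-1."""
    w, x, y, z = q
    u = np.array([x, y, z])
    v = np.asarray(v, dtype=float)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def joint_position(q, ln):
    """Equation (10): p = f(q) = q * (0, ln, 0) * q^-1.
    q  : posture information of the joint (unit quaternion);
    ln : moving radius, i.e. the distance from the rotation centre
         (neck, occipital bone, elbow, ...) to the photosphere.
    Returns the predicted position relative to the rotation centre."""
    return rotate(q, np.array([0.0, ln, 0.0]))

# Example: a joint with a 0.3 m radius whose attitude is a 90-degree rotation
# about the x axis; the radius vector (0, 0.3, 0) maps to approximately (0, 0, 0.3).
q = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])
print(joint_position(q, ln=0.3))
```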
步骤204、根据第二姿态信息,确定光球位于后一个位置点时的第二预测位置,其中,第二预测位置表征光球位于后一个位置点时,相对于初始位置点的位置。Step 204: Determine, according to the second posture information, a second predicted position when the light ball is located at the next position point, where the second predicted position represents the position of the light ball relative to the initial position point when the light ball is located at the next position point.
在本实施例中,具体地,根据第二姿态信息,确定光球位于后一个位置点时的第二预测位置,包括:根据第二姿态信息和骨关节模型,确定第二预测位置。In this embodiment, specifically, determining the second predicted position when the photosphere is at the next position point according to the second posture information includes: determining the second predicted position according to the second posture information and the bone and joint model.
根据第二姿态信息和骨关节模型,确定第二预测位置,包括:根据第二姿态信息、移动半径、预设的第二移动时间,确定第二预测位置,其中,第二移动时间为光球从初始位置点移动至后一个位置点所需的时间。Determining the second predicted position according to the second posture information and the bone joint model includes: determining the second predicted position according to the second posture information, the moving radius, and the preset second moving time, where the second moving time is a photosphere The time required to move from the initial point to the next point.
对于同一个人体关节来说,骨关节模型并不会随着人体关节的移动而发生改变,也就是说,确定光球在相邻两个位置点处的第一预测模型和第二预测模型时所使用的骨关节模型相同。For the same human joint, the bone joint model does not change with the movement of the human joint, that is to say, when determining the first prediction model and the second prediction model of the photosphere at two adjacent positions The bone and joint model used is the same.
步骤204的方法和原理与步骤203的方法和原理相似或相同,参见步骤203的相关记载,此处不再赘述。The method and principle of step 204 are similar to or the same as the method and principle of step 203, please refer to the related record of step 203, which will not be repeated here.
步骤205、根据第二预测位置和第一预测位置,确定光球的移动位移,其中,移动位移表征光球从前一个位置点移动至后一个位置点的位移。Step 205: Determine the movement displacement of the light ball according to the second predicted position and the first predicted position, where the movement displacement represents the displacement of the light ball from the previous position point to the next position point.
在本实施例中,具体地,根据第二预测位置和第一预测位置,在空间坐标系中计算第二预测位置和第一预测位置之间的距离,即为光球从前一个位置点移动至后一个位置点的位移。In this embodiment, specifically, according to the second predicted position and the first predicted position, the distance between the second predicted position and the first predicted position is calculated in the spatial coordinate system, that is, the light sphere moves from the previous position to the The displacement of the latter position point.
Illustratively, Fig. 2d is a schematic diagram, provided by this embodiment, of the photosphere moving from point J to point K. As shown in Fig. 2d, point J and point K are respectively the previous position point and the next position point of two adjacent position points, and the displacement of the photosphere from point J to point K is calculated with the method of this embodiment:
The first predicted position p_J and the second predicted position p_K of the photosphere at point J and point K, respectively, are determined by formula (10) as:
p_J = f(q_J) = q_J*(0, ln, 0)*q_J⁻¹
p_K = f(q_K) = q_K*(0, ln, 0)*q_K⁻¹
Then the displacement Δl of the photosphere from point J to point K is:
Δl = p_K − p_J = q_K*(0, ln, 0)*q_K⁻¹ − q_J*(0, ln, 0)*q_J⁻¹
The above method is only used to explain this embodiment and is not intended to limit this application; other methods may also be used to determine the movement displacement of the photosphere, which will not be repeated in this embodiment.
步骤206、根据移动位移和第一位置信息,确定第二位置信息;根据第二位置信息,生成并输出与控制器对应的虚拟目标的当前显示位置。Step 206: Determine second position information according to the movement displacement and the first position information; according to the second position information, generate and output the current display position of the virtual target corresponding to the controller.
In this embodiment, specifically, the first position information is the real position information of the photosphere at the previous position point; superimposing the movement displacement onto the first position information of the photosphere at the previous position point yields the second position information of the photosphere at the next position point.
Illustratively, in Fig. 2d, assuming that the first position information of the photosphere at point J is p, the second position information p_t of the photosphere at point K, determined with the method of this embodiment, is:
p_t = p + Δl = p + q_K*(0, ln, 0)*q_K⁻¹ − q_J*(0, ln, 0)*q_J⁻¹
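Continuing the earlier sketch after formula (10) (and reusing its joint_position helper), the following hedged example shows how the second position p_t could be computed from the measured position p at point J and the posture quaternions q_J and q_K; the function name is illustrative.

```python
import numpy as np

def predict_second_position(p_J, q_J, q_K, ln):
    """p_J: measured position at point J; q_J, q_K: postures at points J and K."""
    delta_l = joint_position(q_K, ln) - joint_position(q_J, ln)  # Δl = p_K - p_J (predicted)
    return np.asarray(p_J, dtype=float) + delta_l                # p_t = p + Δl
```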
上述方法仅用于对本实施例的解释说明,并不用于限制本申请,本申请还可采用其它方法确定光球的第二位置信息,本实施例不再赘述。The above-mentioned method is only used for the explanation of this embodiment, and is not used to limit this application. This application may also use other methods to determine the second position information of the photosphere, which will not be repeated in this embodiment.
Optionally, before generating and outputting the current display position of the virtual target corresponding to the controller according to the second position information, the method further includes: smoothing the second position information according to pre-stored position information of historical position points of the photosphere to obtain smoothed second position information.
Smoothing the second position information can reduce noise or distortion in the image. The smoothing method used in this embodiment may be a conventional method in the art, for example mean filtering, median filtering, Gaussian filtering or bilateral filtering.
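As a hedged illustration of the smoothing step, the sketch below keeps a short buffer of historical positions and returns a sliding-window mean; replacing np.mean with np.median would give a median filter. The window length and class name are assumptions, not part of the original disclosure.

```python
from collections import deque
import numpy as np

class PositionSmoother:
    def __init__(self, window=5):
        self.history = deque(maxlen=window)        # pre-stored historical position points

    def smooth(self, position):
        self.history.append(np.asarray(position, dtype=float))
        return np.mean(list(self.history), axis=0) # smoothed second position
```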
Optionally, the method of the present application further includes: generating and outputting the current pose information of the photosphere according to the second position information and the second posture information. The pose information includes position information and posture information; the current pose information of the photosphere is generated and output so that it can be referenced when the photosphere is subsequently tracked and positioned.
In this embodiment, the first position information and first posture information of the photosphere at the previous position point are acquired; the posture data detected by the inertial measurement unit is acquired; the second posture information is determined according to the first posture information, the posture data and a preset movement time, where the movement time is the time required for the photosphere to move from the previous position point to the next position point; the first predicted position of the photosphere at the previous position point is determined according to the first posture information, where the first predicted position represents the position of the photosphere, relative to the initial position point, when it is at the previous position point; the second predicted position of the photosphere at the next position point is determined according to the second posture information, where the second predicted position represents the position of the photosphere, relative to the initial position point, when it is at the next position point; the movement displacement of the photosphere is determined according to the second predicted position and the first predicted position, where the movement displacement represents the displacement of the photosphere from the previous position point to the next position point; the second position information is determined according to the movement displacement and the first position information; and the current display position of the virtual target corresponding to the controller is generated and output according to the second position information. Using the position information and posture information of the photosphere at the previous position point to predict its position information at the next position point effectively overcomes the problem that image-processing approaches are easily affected by the background color of the environment in which the photosphere is located, and improves the accuracy and precision of photosphere tracking and positioning. At the same time, since no image acquisition or image processing of the photosphere is needed at the next position point, the delay and stuttering caused by image acquisition and image processing are avoided, the photosphere can be tracked and positioned quickly, the interaction speed between the user and the virtual reality environment is improved, and the user experience is enhanced. Further, using the first predicted position of the photosphere at the previous position point and the second predicted position at the next position point to determine the movement displacement of the photosphere from the previous position point to the next position point, and then determining the second position information of the photosphere at the next position point from the actually measured first position information at the previous position point and this movement displacement, further improves the accuracy and precision of photosphere tracking and positioning.
图3为本申请实施例提供的一种基于虚拟现实的控制器光球追踪装置的结构示意图,如图3所示,该装置包括:Fig. 3 is a schematic structural diagram of a virtual reality-based controller photosphere tracking device provided by an embodiment of the application. As shown in Fig. 3, the device includes:
第一处理单元1,用于根据光球的前一个位置点的第一姿态信息,确定与前一个位置点相邻的后一个位置点的第二姿态信息;The first processing unit 1 is configured to determine the second posture information of the next location point adjacent to the previous location point according to the first posture information of the previous location point of the photosphere;
第二处理单元2,用于根据前一个位置点的第一位置信息、第一姿态信息和第二姿态信息,确定后一个位置点的第二位置信息;The second processing unit 2 is configured to determine the second location information of the next location point according to the first location information, the first posture information, and the second posture information of the previous location point;
第三处理单元3,用于根据第二位置信息,生成并输出与控制器对应的虚拟目标的当前显示位置。The third processing unit 3 is configured to generate and output the current display position of the virtual target corresponding to the controller according to the second position information.
In this embodiment, the second posture information of the next position point adjacent to the previous position point is determined according to the first posture information of the previous position point of the photosphere; the second position information of the next position point is determined according to the first position information, the first posture information and the second posture information of the previous position point; and the current display position of the virtual target corresponding to the controller is generated and output according to the second position information. Using the position information and posture information of the photosphere at the previous position point to predict its position information at the next position point effectively overcomes the problem that image-processing approaches are easily affected by the background color of the environment in which the photosphere is located, and improves the accuracy and precision of photosphere tracking and positioning. At the same time, since no image acquisition or image processing of the photosphere is needed at the next position point, the delay and stuttering caused by image acquisition and image processing are avoided, the photosphere can be tracked and positioned quickly, the interaction speed between the user and the virtual reality environment is improved, and the user experience is enhanced.
图4为本申请实施例提供的又一种基于虚拟现实的控制器光球追踪装置的结构示意图,在图3的基础上,如图4所示:Fig. 4 is a schematic structural diagram of another virtual reality-based controller photosphere tracking device provided by an embodiment of the application. On the basis of Fig. 3, as shown in Fig. 4:
第二处理单元2,包括:The second processing unit 2 includes:
The first processing subunit 21 is configured to determine, according to the first posture information, the first predicted position when the photosphere is at the previous position point, where the first predicted position represents the position of the photosphere, relative to the initial position point, when it is at the previous position point;
The second processing subunit 22 is configured to determine, according to the second posture information, the second predicted position when the photosphere is at the next position point, where the second predicted position represents the position of the photosphere, relative to the initial position point, when it is at the next position point;
第三处理子单元23,用于根据第二预测位置和第一预测位置,确定光球的移动位移,其中,移动位移表征光球从前一个位置点移动至后一个位置点的位移;The third processing subunit 23 is configured to determine the movement displacement of the photosphere according to the second predicted position and the first predicted position, where the movement displacement represents the displacement of the photosphere from the previous position point to the next position point;
第四处理子单元24,用于根据移动位移和第一位置信息,确定第二位置信息。The fourth processing subunit 24 is configured to determine the second position information according to the movement displacement and the first position information.
第一处理子单元21,包括:The first processing subunit 21 includes:
第一处理模块211,用于根据第一姿态信息和预设的骨关节模型,确定第一预测位置,其中,骨关节模型用于指示人体关节的移动关系;The first processing module 211 is configured to determine a first predicted position according to the first posture information and a preset bone joint model, where the bone joint model is used to indicate the movement relationship of the human joints;
第二处理子单元22,包括:The second processing subunit 22 includes:
第二处理模块221,用于根据第二姿态信息和骨关节模型,确定第二预测位置。The second processing module 221 is configured to determine the second predicted position according to the second posture information and the bone joint model.
其中,骨关节模型中包括预设的移动半径;第一处理模块211,包括:Wherein, the bone joint model includes a preset moving radius; the first processing module 211 includes:
The first processing submodule 2111 is configured to determine the first predicted position according to the first posture information, the movement radius and a preset first movement time, where the first movement time is the time required for the photosphere to move from the initial position point to the previous position point;
第二处理模块221,包括:The second processing module 221 includes:
The second processing submodule 2211 is configured to determine the second predicted position according to the second posture information, the movement radius and a preset second movement time, where the second movement time is the time required for the photosphere to move from the initial position point to the next position point.
第一处理单元1,包括:The first processing unit 1, including:
第五处理子单元11,用于获取惯性测量单元所检测到的姿态数据;The fifth processing subunit 11 is used to obtain the posture data detected by the inertial measurement unit;
The sixth processing subunit 12 is configured to determine the second posture information according to the first posture information, the posture data and a preset movement time, where the movement time is the time required for the photosphere to move from the previous position point to the next position point.
第六处理子单元12,包括:The sixth processing subunit 12 includes:
第三处理模块121,用于根据姿态数据和移动时间,确定移动角度;The third processing module 121 is configured to determine the movement angle according to the posture data and the movement time;
第四处理模块122,用于根据移动角度和第一姿态信息,确定第二姿态信息。The fourth processing module 122 is configured to determine the second posture information according to the movement angle and the first posture information.
其中,姿态数据为以下的任意一种:旋转角速度、重力加速度、偏航角、俯仰角。Among them, the attitude data is any one of the following: rotation angular velocity, gravitational acceleration, yaw angle, and pitch angle.
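Following the third and fourth processing modules above, the sketch below is a hedged illustration of that computation, assuming the posture data is a rotation angular velocity vector and postures are unit quaternions in (w, x, y, z) form: the movement angle is the angular speed integrated over the movement time, and the second posture is the first posture composed with the corresponding rotation. The quat_mul helper is the Hamilton product from the earlier sketch, repeated here so the example stands alone; all names are illustrative.

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def second_posture(q_first, angular_velocity, move_time):
    omega = np.asarray(angular_velocity, dtype=float)
    angle = np.linalg.norm(omega) * move_time              # movement angle = |omega| * t
    if angle == 0.0:
        return np.asarray(q_first, dtype=float)             # no rotation during the interval
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    return quat_mul(dq, np.asarray(q_first, dtype=float))   # rotate the first posture by dq
```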
The device also includes an acquiring unit 4, configured to acquire the first position information and the first posture information of the previous position point of the photosphere before the first processing unit 1 determines, according to the first posture information of the previous position point of the photosphere, the second posture information of the next position point adjacent to the previous position point;
其中,获取单元4包括:Among them, the obtaining unit 4 includes:
获取子单元41,用于获取图像,其中,图像为光球位于前一个位置点时采集单元所采集的图像;The acquiring subunit 41 is used to acquire an image, where the image is an image acquired by the acquisition unit when the photosphere is located at a previous position;
第七处理子单元42,用于根据图像,确定光球在图像中的位置,以得到第一位置信息。The seventh processing subunit 42 is used to determine the position of the light ball in the image according to the image to obtain the first position information.
该装置还包括:The device also includes:
The fourth processing unit 5 is configured to smooth the second position information according to pre-stored position information of historical position points of the photosphere to obtain smoothed second position information, before the third processing unit 3 generates and outputs the current display position of the virtual target corresponding to the controller according to the second position information.
该装置还包括:The device also includes:
第五处理单元6,用于根据第二位置信息和第二姿态信息,生成并输出光球的当前位姿信息。The fifth processing unit 6 is configured to generate and output the current pose information of the photosphere according to the second position information and the second posture information.
In this embodiment, the first position information and first posture information of the photosphere at the previous position point are acquired; the posture data detected by the inertial measurement unit is acquired; the second posture information is determined according to the first posture information, the posture data and a preset movement time, where the movement time is the time required for the photosphere to move from the previous position point to the next position point; the first predicted position of the photosphere at the previous position point is determined according to the first posture information, where the first predicted position represents the position of the photosphere, relative to the initial position point, when it is at the previous position point; the second predicted position of the photosphere at the next position point is determined according to the second posture information, where the second predicted position represents the position of the photosphere, relative to the initial position point, when it is at the next position point; the movement displacement of the photosphere is determined according to the second predicted position and the first predicted position, where the movement displacement represents the displacement of the photosphere from the previous position point to the next position point; the second position information is determined according to the movement displacement and the first position information; and the current display position of the virtual target corresponding to the controller is generated and output according to the second position information. Using the position information and posture information of the photosphere at the previous position point to predict its position information at the next position point effectively overcomes the problem that image-processing approaches are easily affected by the background color of the environment in which the photosphere is located, and improves the accuracy and precision of photosphere tracking and positioning. At the same time, since no image acquisition or image processing of the photosphere is needed at the next position point, the delay and stuttering caused by image acquisition and image processing are avoided, the photosphere can be tracked and positioned quickly, the interaction speed between the user and the virtual reality environment is improved, and the user experience is enhanced. Further, using the first predicted position of the photosphere at the previous position point and the second predicted position at the next position point to determine the movement displacement of the photosphere from the previous position point to the next position point, and then determining the second position information of the photosphere at the next position point from the actually measured first position information at the previous position point and this movement displacement, further improves the accuracy and precision of photosphere tracking and positioning.
根据本申请的实施例,本申请还提供了一种电子设备和一种可读存储介质。According to the embodiments of the present application, the present application also provides an electronic device and a readable storage medium.
如图5所示,是根据本申请实施例的基于虚拟现实的控制器光球追踪方法的电子设备的框图。电子设备旨在表示各种形式的数字计算机,诸如,膝上型计算机、台式计算机、工作台、个人数字助理、服务器、刀片式服务器、大型计算机、和其它适合的计算机。电子设备还可以表示各种形式的移动装置,诸如,个人数字处理、蜂窝电话、智能电话、可穿戴设备和其它类似的计算装置。本文所示的部件、它们的连接和关系、以及它们的功能仅仅作为示例,并且不意在限制本文中描述的和/或者要求的本申请的实现。As shown in FIG. 5, it is a block diagram of an electronic device based on a virtual reality-based controller photosphere tracking method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices can also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementation of the application described and/or required herein.
As shown in FIG. 5, the electronic device includes one or more processors 501, a memory 502, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are connected to one another by different buses and can be mounted on a common motherboard or mounted in other ways as needed. The processor can process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, if necessary, multiple processors and/or multiple buses can be used together with multiple memories. Likewise, multiple electronic devices can be connected, each providing part of the necessary operations (for example, as a server array, a group of blade servers, or a multi-processor system). One processor 501 is taken as an example in FIG. 5.
存储器502即为本申请所提供的非瞬时计算机可读存储介质。其中,存储器存储有可由至少一个处理器执行的指令,以使至少一个处理器执行本申请所提供的基于虚拟现实的控制器光球追踪方法。本申请的非瞬时计算机可读存储介质存储计算机指令,该计算机指令用于使计算机执行本申请所提供的基于虚拟现实的控制器光球追踪方法。The memory 502 is a non-transitory computer-readable storage medium provided by this application. The memory stores instructions that can be executed by at least one processor, so that the at least one processor executes the virtual reality-based controller photosphere tracking method provided in the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions, and the computer instructions are used to make the computer execute the virtual reality-based controller photosphere tracking method provided by the present application.
As a non-transitory computer-readable storage medium, the memory 502 can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the virtual reality-based controller photosphere tracking method in the embodiments of the present application (for example, the acquiring unit 1, the first processing unit 2 and the second processing unit 3 shown in FIG. 3). The processor 501 executes the various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 502, that is, it implements the virtual reality-based controller photosphere tracking method in the above method embodiments.
The memory 502 may include a program storage area and a data storage area. The program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the electronic device for virtual reality-based photosphere tracking, and the like. In addition, the memory 502 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 502 may optionally include memories arranged remotely from the processor 501, and these remote memories may be connected over a network to the electronic device for virtual reality-based photosphere tracking. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
基于虚拟现实的光球追踪的方法的电子设备还可以包括:输入装置503和输出装置504。处理器501、存储器502、输入装置503和输出装置504可以通过总线或者其他方式连接,图5中以通过总线连接为例。The electronic device based on the virtual reality-based photosphere tracking method may further include: an input device 503 and an output device 504. The processor 501, the memory 502, the input device 503, and the output device 504 may be connected by a bus or in other ways. In FIG. 5, the connection by a bus is taken as an example.
The input device 503 can receive input digital or character information and generate key signal inputs related to the user settings and function control of the electronic device for virtual reality-based photosphere tracking; it may be, for example, a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, a joystick or another input device. The output device 504 may include a display device, an auxiliary lighting device (for example, an LED), a tactile feedback device (for example, a vibration motor) and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display and a plasma display. In some embodiments, the display device may be a touch screen.
Various implementations of the systems and techniques described herein can be realized in digital electronic circuit systems, integrated circuit systems, application-specific ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and that receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software, software applications or code) include machine instructions for a programmable processor and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device and/or apparatus (for example, a magnetic disk, an optical disk, a memory, or a programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide interaction with a user, the systems and techniques described here can be implemented on a computer having a display device for displaying information to the user (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) and a keyboard and pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback or tactile feedback), and input from the user can be received in any form (including acoustic input, voice input or tactile input).
The systems and techniques described here can be implemented in a computing system that includes a back-end component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a front-end component (for example, a user computer with a graphical user interface or a web browser through which the user can interact with implementations of the systems and techniques described here), or in a computing system that includes any combination of such back-end, middleware or front-end components. The components of the system can be connected to one another through any form or medium of digital data communication (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN) and the Internet.
计算机系统可以包括客户端和服务器。客户端和服务器一般远离彼此并且通常通过通信网络进行交互。通过在相应的计算机上运行并且彼此具有客户端-服务器关系的计算机程序来产生客户端和服务器的关系。The computer system can include clients and servers. The client and server are generally far away from each other and usually interact through a communication network. The relationship between the client and the server is generated by computer programs that run on the corresponding computers and have a client-server relationship with each other.
In the related art, the controller carries an inertial measurement unit (IMU). The IMU can measure the angular velocity and acceleration of the controller in three-dimensional space, from which the attitude of the controller is computed, realizing three-degrees-of-freedom (3DOF) tracking.
However, the above technology cannot measure the position of the controller, so the degrees of freedom of translation along the three rectangular coordinate axes X, Y and Z cannot be obtained. Therefore, when the user manipulates the controller to translate it, the change in the controller's position is difficult to track, resulting in poor interaction between the user and the surrounding environment and degrading the user experience.
本申请实施例通过在控制器上设置多点发光单元,采用视觉法追踪控制器的多个光点,实现对控制器的位置和姿态的追踪,从而实现控制器的6DOF追踪。In the embodiment of the present application, a multi-point light-emitting unit is set on the controller, and multiple light points of the controller are tracked by a visual method, so as to track the position and posture of the controller, thereby achieving 6DOF tracking of the controller.
This embodiment provides a tracking method for a controller, which can be applied to the application scenario shown in FIG. 6. As shown in FIG. 6, the application scenario provided by this embodiment includes the tracking processor 101 of the controller, the controller 102 and the image acquisition device 103. FIG. 7 is a schematic diagram of a controller provided by this embodiment. As shown in FIG. 7, the controller carries a multi-point light-emitting unit, and the multi-point light-emitting unit includes multiple light points. This embodiment does not limit the specific form of the controller, which can be set according to the actual application scenario; for example, one possible form of the controller is a handle that the user holds in the hand and moves. The tracking processor 101 of the controller can acquire, through the image acquisition device 103, the transformation sequence images of the multi-point light-emitting unit while the controller 102 moves, and thereby track the position and posture of the controller and determine its six-degree-of-freedom tracking data.
上述应用场景仅为一种示例性场景,具体实施时,可以根据需求应用在不同场景中,例如,应用场景中包括追踪处理器和图像获取装置,以及手环、指环和手表中的任意一个,该手环、指环或者手表携带有多点发光单元,多点发光单元包括多个光点,从而实现手环、指环或者手表的追踪。The above application scenario is only an exemplary scenario. During specific implementation, it can be applied in different scenarios according to requirements. For example, the application scenario includes a tracking processor and an image acquisition device, as well as any one of a bracelet, a ring, and a watch. The bracelet, ring, or watch carries a multi-point light-emitting unit, and the multi-point light-emitting unit includes a plurality of light points, so as to realize the tracking of the bracelet, ring or watch.
下面以具体地实施例对本申请的技术方案以及本申请的技术方案如何解决上述技 术问题进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。下面将结合附图,对本申请的实施例进行描述。In the following, specific embodiments are used to describe in detail the technical solution of the present application and how the technical solution of the present application solves the above-mentioned technical problems. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present application will be described below in conjunction with the accompanying drawings.
FIG. 8 is a schematic flowchart of a tracking method for a controller provided by an embodiment of the application. The controller carries a multi-point light-emitting unit, and the execution subject of this embodiment may be the tracking processor of the controller in the embodiment shown in FIG. 6. As shown in FIG. 8, the method may include:
S301:根据图像获取装置获取所述控制器在移动过程中所述多点发光单元的变换序列图像,确定所述序列图像中光点的变换方式。S301: Obtain, according to the image acquisition device, the transformed sequence image of the multi-point light-emitting unit during the movement of the controller, and determine the transformation mode of the light points in the sequence image.
Exemplarily, the image acquisition device may be a monocular, binocular or multi-camera rig built into the all-in-one device. When the image acquisition device is a binocular or multi-camera rig, each camera independently acquires the transformation sequence images of the multi-point light-emitting unit while the controller moves. A binocular or multi-camera setup can expand the tracking range, but this embodiment is equally applicable to a monocular camera. Taking a monocular camera as an example, the camera captures images of the multi-point light-emitting unit while the controller moves. The multi-point light-emitting unit includes multiple light points; in this embodiment the number of light points is greater than or equal to 4, and the specific number can be set according to the actual application scenario. Each light point is transformed according to a different transformation mode, and this embodiment does not limit the transformation mode. For example, the transformation mode of the first light point is RGBRGB; the transformation mode of the second light point is RRGGBB; the transformation mode of the third light point is 101010; and the transformation mode of the fourth light point is 110011, where R, G and B denote red, green and blue, and 1 and 0 denote bright and dark respectively. It can be understood that the color transformation is not limited to red, green and blue and may include red, orange, yellow, green, cyan, blue, purple and so on; the brightness levels are likewise not limited to fully bright and fully dark, and multiple brightness levels such as fully bright, 3/4 bright, half bright, 1/4 bright and dark may be used. A transformation mode may also combine color transformation and brightness-level transformation. The transformation sequence images of the multi-point light-emitting unit are acquired by the image acquisition device, and the transformation mode of each light point in the sequence images can be determined from information such as the color and brightness level of the light points in the sequence images. This embodiment does not limit how the color, brightness level and other information of the light points in the sequence images are obtained. For example, a difference threshold may be set for the color value of each preset color according to the actual situation: if the difference between a light point's color value and the color value of a preset color is less than a first preset difference threshold, the color of that light point is the preset color. Similarly, a difference threshold may be set for the spot diameter of each brightness level according to the actual situation: if the difference between a light point's spot diameter and the spot diameter of a brightness level is less than a second preset difference threshold, the brightness of that light point is the brightness level.
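As a hedged illustration of how a light point could be identified from its observed transformation sequence, the sketch below matches the per-frame symbols against a codebook mirroring the example modes in the text (RGBRGB, RRGGBB, 101010, 110011). The symbol classification itself (color and brightness thresholding) is assumed to happen upstream, and all names are illustrative.

```python
# Codebook mapping observed transformation sequences to light point identifiers.
CODEBOOK = {
    "RGBRGB": 1,   # first light point
    "RRGGBB": 2,   # second light point
    "101010": 3,   # third light point (1 = bright, 0 = dark)
    "110011": 4,   # fourth light point
}

def identify_light_point(observed_symbols):
    """observed_symbols: e.g. ['R', 'G', 'B', 'R', 'G', 'B'] collected over consecutive frames."""
    key = "".join(observed_symbols)
    return CODEBOOK.get(key)   # None if the sequence matches no known identifier
```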
上述多点发光单元的变化频率可以根据实际应用场景进行设定,图像获取装置的拍摄频率应与多点发光单元的变化频率保持一致,以使图像获取装置的拍摄与多点发光单元变换同步,图像获取装置能够正好拍下多点发光单元中各个灯的变换,从而能够准确确定序列图像中光点的变换方式。The change frequency of the above-mentioned multi-point light-emitting unit can be set according to the actual application scenario. The shooting frequency of the image acquisition device should be consistent with the change frequency of the multi-point light-emitting unit, so that the shooting of the image acquisition device is synchronized with the conversion of the multi-point light-emitting unit. The image acquisition device can precisely capture the transformation of each lamp in the multi-point light-emitting unit, so that the transformation mode of the light points in the sequence image can be accurately determined.
S302:根据所述光点的变换方式,获得所述序列图像中目标光点对应的标识。S302: Obtain an identifier corresponding to the target light point in the sequence image according to the conversion mode of the light point.
Exemplarily, the multi-point light-emitting unit includes multiple light points, and each light point is transformed according to a different transformation mode. In this way, the identifier corresponding to each target light point in the sequence images can be determined from the transformation modes of the light points in the sequence images. The number of target light points can be set according to the actual application scenario: for example, when the multi-point light-emitting unit has few light points, the target light points may be all of its light points; when it has many light points, the target light points may be a subset of its light points. The selection of the target light points can also be set according to the actual application scenario, for example choosing light points that remain within the field of view of the image acquisition device throughout the movement of the controller.
S303:基于所述目标光点对应的标识,确定所述目标光点在所述序列图像中每一帧图像中的映射位置。S303: Based on the identifier corresponding to the target light spot, determine the mapping position of the target light spot in each frame of the image sequence.
Exemplarily, the target light points in the sequence images are extracted through the Open Source Computer Vision Library (OpenCV), and the horizontal and vertical pixel coordinates of the target light points are obtained, so as to obtain the mapping position, in each frame of the sequence images, of the light point corresponding to each identifier.
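The disclosure only states that OpenCV is used; the sketch below is one hedged way to obtain spot centers in pixel coordinates, using simple thresholding plus contour moments. The grayscale conversion, threshold value and OpenCV 4.x return signature of findContours are assumptions, not part of the original disclosure.

```python
import cv2

def extract_spot_centers(frame, thresh=200):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))  # (u, v) pixel coordinates
    return centers
```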
S304:根据所述映射位置、所述目标光点的初始位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述控制器的六自由度追踪数据。S304: Obtain six-degree-of-freedom tracking data of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller.
The mapping positions, the initial positions of the target light points and the position of the image acquisition device during the movement of the controller are input into OpenCV to obtain the positions of the target light points during the movement of the controller, from which the position and posture of the controller are determined.
In the tracking method for a controller provided by the embodiments of the application, the controller carries a multi-point light-emitting unit. The method acquires, through an image acquisition device, the transformation sequence images of the multi-point light-emitting unit while the controller moves, and determines the transformation mode of the light points in the sequence images. Here, the transformation mode of each light point is different, so the embodiments of the application can accurately determine the identifier corresponding to each target light point in the sequence images from the transformation modes of the light points. Based on the identifier corresponding to the target light point, the mapping position of the target light point in each frame of the sequence images is determined; from the mapping position and the initial position of the target light point, the position of the target light point relative to the image acquisition device during the movement of the controller is obtained, and then, from this relative position and the position of the image acquisition device during the movement of the controller, the position of the target light point is obtained. Since the three-dimensional geometric structure of the multi-point light-emitting unit in the controller is fixed, once the positions of the target light points are obtained, the three-dimensional spatial position and rotation posture of the controller can be determined, realizing six-degree-of-freedom tracking of the controller and improving the interactivity between the user and the surrounding environment. At the same time, the tracking method provided by the embodiments of the application does not require additional devices, such as the laser detection devices required for laser positioning, thereby saving cost and space.
In addition, to solve the problems of unsmooth and delayed tracking data, the embodiments of the application also take into account the attitude tracking result of the controller sent by the IMU. FIG. 9 is a schematic flowchart of another tracking method for a controller provided by an embodiment of the application. The controller carries a multi-point light-emitting unit, and the execution subject of this embodiment may be the tracking processor of the controller in the embodiment shown in FIG. 6. As shown in FIG. 9, the method includes:
S401:根据图像获取装置获取所述控制器在移动过程中所述多点发光单元的变换序列图像,确定所述序列图像中光点的变换方式。S401: Acquire, according to an image acquisition device, a transformed sequence image of the multi-point light-emitting unit during the movement of the controller, and determine a transformation mode of light points in the sequence image.
S402:根据所述光点的变换方式,获得所述序列图像中目标光点对应的标识。S402: Obtain an identifier corresponding to the target light point in the sequence image according to the conversion mode of the light point.
S403:基于所述目标光点对应的标识,确定所述目标光点在所述序列图像中每一帧图像中的映射位置。S403: Based on the identifier corresponding to the target light spot, determine the mapping position of the target light spot in each frame of the image sequence.
该S401-S403与上述S301-S303实现方式相同,在此不再赘述。The implementation of S401-S403 is the same as the foregoing S301-S303, and will not be repeated here.
S404:获取IMU发送的对所述控制器进行姿态追踪的结果。S404: Obtain the attitude tracking result of the controller sent by the IMU.
In this embodiment, from the mapping positions, the initial positions of the target light points and the position of the image acquisition device during the movement of the controller, the position and posture of the controller can only be determined at the moment each frame is captured, so the tracking data are not smooth, and the above tracking method suffers from delay. IMU attitude tracking, by contrast, has a high update rate and lower delay and yields smooth tracking data.
基于此,本实施例需要获取IMU发送的对该控制器进行姿态追踪的结果。Based on this, this embodiment needs to obtain the posture tracking result of the controller sent by the IMU.
本实施例对S404与S401-S403的先后顺序不做限定,即可以先执行S404,再执行S401-S403,也可以先执行S401-S403,再执行S404。This embodiment does not limit the sequence of S404 and S401-S403, that is, S404 may be executed first, and then S401-S403, or S401-S403 may be executed first, and then S404 may be executed.
在S404之后,执行如下步骤:根据所述映射位置、所述目标光点的初始位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述控制器的六自由度追踪数据。After S404, perform the following steps: obtain the six-degree-of-freedom tracking data of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller .
可选地,上述根据所述映射位置、所述目标光点的初始位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述控制器的六自由度追踪数据,包括:Optionally, obtaining the six-degree-of-freedom tracking data of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller includes:
S4051:根据所述映射位置、所述目标光点的初始位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述控制器的位置和姿态。S4051: Obtain the position and posture of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller.
The mapping positions, the initial positions of the target light points and the position of the image acquisition device during the movement of the controller are input into OpenCV to obtain the positions of the target light points during the movement of the controller, thereby determining the position and posture of the controller at the moment each frame is captured.
可选地,所述目标光点的数量不少于预设数量;Optionally, the number of the target light points is not less than a preset number;
所述根据所述映射位置、所述目标光点的初始位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述控制器的位置和姿态,包括;The obtaining the position and posture of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller includes;
根据所述映射位置、所述目标光点的初始位置,通过PnP算法,获得所述目标光点相对于所述图像获取装置的位置;Obtaining the position of the target light spot relative to the image acquisition device through a PnP algorithm according to the mapping position and the initial position of the target light spot;
根据所述目标光点相对于所述图像获取装置的位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述目标光点的位置;Obtaining the position of the target light spot according to the position of the target light spot relative to the image acquisition device and the position of the image acquisition device during the movement of the controller;
根据所述目标光点的位置,获得所述控制器的位置和姿态。According to the position of the target light spot, the position and posture of the controller are obtained.
Exemplarily, the number of target light points is not less than a preset number, which can be set according to the actual application scenario; in this embodiment the number of target light points must be at least 4 so that the position of the target light points relative to the image acquisition device can be obtained through the PnP algorithm. After the mapping positions and the initial positions of the target light points are input into OpenCV, the PnP algorithm yields the position of the target light points relative to the image acquisition device during the movement of the controller, and the positions of the target light points are then obtained from this relative position and the position of the image acquisition device during the movement of the controller. The PnP algorithm solves for 3D-to-2D point-pair motion: it describes how the camera pose is obtained when n (n ≥ 4) 3D space points and their mapped positions are known. The camera pose and the positions of the n 3D space points are relative to each other, so when the camera pose and the mapped positions of the n 3D space points are known, the positions of the n 3D space points can be obtained through the PnP algorithm. Since the three-dimensional geometric structure of the multi-point light-emitting unit in the controller is fixed, once the positions of the target light points are obtained, the three-dimensional spatial position and rotation posture of the controller can be determined, and the six-degree-of-freedom tracking data of the controller are thereby obtained.
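The sketch below is a hedged example of this PnP step using OpenCV's solver; the solver choice and argument preparation are assumptions. object_points are the known initial 3D positions of the target light points in the controller's own frame, image_points are their mapped pixel positions in one frame, and camera_matrix/dist_coeffs come from camera calibration.

```python
import cv2
import numpy as np

def solve_controller_pose(object_points, image_points, camera_matrix, dist_coeffs):
    # Requires at least 4 point correspondences, matching the text above.
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float32),
        np.asarray(image_points, dtype=np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation mapping controller-frame points into the camera frame
    return R, tvec               # tvec: translation of the light points relative to the camera
```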
It can be understood that, when the image acquisition device is a binocular or multi-camera rig, the position of each target light point relative to each camera is obtained through the PnP algorithm from the mapping position of the target light point in each frame of that camera's sequence images and the initial position of the target light point; based on the positions of the target light points relative to each camera and the position of each image acquisition device during the movement of the controller, two or more sets of target light point positions are obtained, and these sets are summed or weighted-summed to obtain the positions of the target light points, thereby improving the accuracy of the obtained positions.
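The text describes combining the per-camera position sets by summing or weighted summing; the sketch below, a hedged illustration rather than the disclosed implementation, normalizes that combination into a plain or weighted average so the result stays in the same coordinate scale. The weights (for example, per-camera confidences) are an assumption.

```python
import numpy as np

def fuse_positions(positions, weights=None):
    positions = np.asarray(positions, dtype=float)            # shape (num_cameras, 3)
    if weights is None:
        return positions.mean(axis=0)                         # unweighted combination
    weights = np.asarray(weights, dtype=float)
    return (positions * weights[:, None]).sum(axis=0) / weights.sum()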
S4052:将所述控制器的位置和姿态,以及所述IMU发送的对所述控制器进行姿态追踪的结果,进行融合,获得所述控制器的六自由度追踪数据。S4052: Fuse the position and posture of the controller and the result of posture tracking of the controller sent by the IMU to obtain six-degree-of-freedom tracking data of the controller.
Exemplarily, the position and posture of the controller and the attitude tracking result of the controller sent by the IMU are input into OpenCV and are mutually compensated, corrected, smoothed and predicted through a preset fusion algorithm, thereby obtaining the six-degree-of-freedom tracking data of the controller.
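The disclosure does not name the preset fusion algorithm; the following is a minimal complementary-filter style sketch under that caveat, not the actual algorithm. The fast, smooth IMU orientation is blended with the visual orientation via normalized linear interpolation, and the visual position corrects an IMU-propagated position estimate. The blend factor alpha and all names are illustrative assumptions.

```python
import numpy as np

def fuse_pose(visual_pos, visual_quat, imu_quat, propagated_pos, alpha=0.98):
    visual_quat = np.asarray(visual_quat, dtype=float)
    imu_quat = np.asarray(imu_quat, dtype=float)
    if np.dot(visual_quat, imu_quat) < 0:       # keep both quaternions in the same hemisphere
        visual_quat = -visual_quat
    fused_quat = alpha * imu_quat + (1.0 - alpha) * visual_quat
    fused_quat /= np.linalg.norm(fused_quat)    # re-normalize after the linear blend
    fused_pos = alpha * np.asarray(propagated_pos, dtype=float) \
                + (1.0 - alpha) * np.asarray(visual_pos, dtype=float)
    return fused_pos, fused_quat
```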
In this embodiment, by fusing the position and posture of the controller with the attitude tracking result of the controller sent by the IMU, the advantages of IMU attitude tracking in update rate and smoothness can be fully exploited, while the drift and error accumulation of IMU attitude tracking, which make it difficult to track the controller's position changes and prevent 6DOF tracking, are overcome. At the same time, the problems of unsmooth and delayed tracking data that arise when relying only on the mapping positions, the initial positions of the target light points and the position of the image acquisition device during the movement of the controller are also solved.
In addition, in the embodiments of the present application the controller carries a multi-point light-emitting unit. The image acquisition device acquires the transformed sequence images of the multi-point light-emitting unit during the movement of the controller, and the transformation mode of the light points in the sequence images is determined. Since each light point has a different transformation mode, the embodiments of the present application can accurately determine the identifier corresponding to the target light point in the sequence images according to the transformation mode of the light points; then, based on the identifier corresponding to the target light point, the mapping position of the target light point in each frame of the sequence images is determined. From the mapping position and the initial position of the target light point, the position of the target light point relative to the image acquisition device during the movement of the controller is obtained, and then, from that relative position and the position of the image acquisition device, the position of the target light point is obtained. Since the three-dimensional geometric structure of the multi-point light-emitting unit in the controller is fixed, once the position of the target light point is obtained, the position and posture of the controller can be determined. By fusing the position and posture of the controller with the posture-tracking result sent by the IMU, the advantages of IMU posture tracking in update rate and smoothness can be fully utilized, which solves the problem that tracking data obtained only from the mapping position, the initial position of the target light point and the position of the image acquisition device during the movement of the controller is not smooth and is delayed.
FIG. 10 is a schematic flowchart of yet another controller tracking method provided by an embodiment of the present application. The controller carries a multi-point light-emitting unit, and the execution subject of this embodiment may be the tracking processor of the controller in the embodiment shown in FIG. 6. As shown in FIG. 10, the method includes:
S501:提取所述序列图像中的光点。S501: Extract light points in the sequence of images.
S502:基于所述光点,识别出所述序列图像中相邻帧的相同点。S502: Based on the light points, identify the same points of adjacent frames in the sequence of images.
可选地,所述基于所述光点,识别出所述序列图像中相邻帧的相同点,包括:Optionally, the identifying the same points of adjacent frames in the sequence of images based on the light points includes:
获得所述序列图像中相邻帧的光点中心之间的距离;Obtaining the distance between the centers of light spots of adjacent frames in the sequence of images;
根据所述距离和预设距离阈值,识别出所述序列图像中相邻帧的相同点。According to the distance and a preset distance threshold, the same points of adjacent frames in the sequence of images are identified.
Exemplarily, the target light points in the above sequence of images are extracted through OpenCV, and the horizontal and vertical pixel coordinates of each light point are obtained. The distance between the light-point centers of adjacent frames is

d₁ = √((u₁ − u₂)² + (v₁ − v₂)²)

where u₁ and v₁ are the horizontal and vertical pixel coordinates of the light point in the previous frame, u₂ and v₂ are the horizontal and vertical pixel coordinates of the light point in the next frame, and the preset distance threshold is d₀. If d₁ ≤ d₀, the two light points are judged to be the same point; otherwise (d₁ > d₀), the two light points are judged not to be the same point.
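Expressed as code, the adjacent-frame matching just described might look like the following sketch; the threshold value and the simple nearest-neighbour pairing are assumptions made only for illustration:

```python
import math

D0 = 8.0  # preset distance threshold d0 in pixels (assumed value)

def is_same_point(center_prev, center_next, d0=D0):
    """Return True if two light-point centers from adjacent frames are the same point."""
    u1, v1 = center_prev
    u2, v2 = center_next
    d1 = math.sqrt((u1 - u2) ** 2 + (v1 - v2) ** 2)
    return d1 <= d0

def match_points(prev_centers, next_centers, d0=D0):
    """Pair each center in the previous frame with the nearest center in the
    next frame, accepting the pair only if the distance is within d0."""
    matches = {}
    for i, c_prev in enumerate(prev_centers):
        best_j, best_d = None, float("inf")
        for j, c_next in enumerate(next_centers):
            d = math.dist(c_prev, c_next)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None and best_d <= d0:
            matches[i] = best_j
        # otherwise the point is treated as discontinuous (left unmatched)
    return matches
```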
S503: Determine whether the same points are continuous.
若所述相同点连续,则执行S5041,若所述相同点不连续,则执行S5042-S5043。If the same points are continuous, S5041 is executed, and if the same points are not continuous, S5042-S5043 are executed.
Exemplarily, while the controller is moving, a light point in the lamp group is sometimes turned to a place that the image acquisition device cannot capture; the captured image then no longer contains that light point (the previous frame contains the light point, but the next frame does not). Sometimes a light point that could not be captured before reappears and is captured again (the earlier frame does not contain the light point, but the later frame does). In such cases the same point cannot be found in some images of the sequence, and the same point is discontinuous in the sequence images. Conversely, if a light point of the lamp group is never turned to a place the image acquisition device cannot capture, the same point can be found in every frame, and the same point is continuous in the sequence images.
S5041:根据所述光点的初始标识,获得所述序列图像中目标光点对应的标识。S5041: Obtain an identifier corresponding to the target light point in the sequence image according to the initial identifier of the light point.
If the same point is continuous, that is, the light point can be found in every frame of the sequence images, then the identifier corresponding to that light point in each frame of the sequence images can be determined directly from the initial identifier of the light point, without obtaining the identifier corresponding to the target light point through the transformation mode of the light point, which simplifies the operation flow and improves tracking efficiency.
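A sketch of carrying the initial identifiers forward frame by frame while the same points remain continuous is shown below; points that drop out are simply left unmatched and must later be re-identified from their transformation mode, and the data layout and helper names are assumptions:

```python
def propagate_ids(frames_centers, match_fn, initial_ids):
    """Carry light-point identifiers forward frame by frame.

    frames_centers: list of per-frame lists of (u, v) light-point centers.
    match_fn:       function(prev_centers, next_centers) -> {prev_idx: next_idx},
                    e.g. the match_points sketch above.
    initial_ids:    identifiers of the points in the first frame, by index.
    Returns per-frame {point_index: identifier}; points that reappear after a
    gap are absent from the result and must be re-identified from their
    color / brightness transformation pattern.
    """
    ids_per_frame = [dict(enumerate(initial_ids))]
    for prev, nxt in zip(frames_centers, frames_centers[1:]):
        matches = match_fn(prev, nxt)
        prev_ids = ids_per_frame[-1]
        next_ids = {j: prev_ids[i] for i, j in matches.items() if i in prev_ids}
        ids_per_frame.append(next_ids)
    return ids_per_frame
```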
S5042:根据图像获取装置获取所述控制器在移动过程中所述多点发光单元的变换序列图像,确定所述序列图像中光点的变换方式。S5042: Obtain, according to the image acquisition device, the transformed sequence image of the multi-point light-emitting unit during the movement of the controller, and determine the transformation mode of the light points in the sequence image.
Exemplarily, if the same point is discontinuous in the sequence images, the same point before and after the interruption cannot be found in the sequence, so the identifier corresponding to that light point in the images after the interruption cannot be determined from its initial identifier. Therefore, for a light point that could not be captured before and then reappears, the identifier corresponding to the target light point in the sequence images needs to be obtained through the transformation mode of the light point when it reappears.
可选地,所述光点为LED光点,所述变换方式包括颜色变换和/或亮度等级变换,Optionally, the light spot is an LED light spot, and the conversion method includes color conversion and/or brightness level conversion,
所述确定所述序列图像中光点的变换方式,可以通过以下方式实现:The determination of the transformation mode of the light points in the sequence image may be implemented in the following manners:
根据所述相同点的颜色和/或亮度等级,确定所述相同点在一组序列图像中的颜色变换和/或亮度等级变换。According to the color and/or brightness level of the same point, determine the color transformation and/or brightness level transformation of the same point in a set of sequence images.
Exemplarily, for an LED light point whose same point is discontinuous in the sequence images, when it reappears, the color transformation and/or brightness level transformation of the same point in a group of sequence images is determined based on the color and/or brightness level of the same point. The number of frames in a group of sequence images is related to the transformation period of the light point; for example, if the light point transforms with a period of four, then a group of sequence images is four consecutive frames.
可选地,所述光点为红外光点,所述变换方式包括红外亮暗等级变换,Optionally, the light spot is an infrared light spot, and the conversion method includes infrared light-dark level conversion,
所述确定所述序列图像中光点的变换方式,还可以通过以下方式实现:The determination of the transformation mode of the light points in the sequence image may also be implemented in the following manner:
根据所述相同点的红外亮暗等级,得到所述相同点在一组序列图像中的红外亮暗等级变换。According to the infrared light-dark level of the same point, the infrared light-dark level transformation of the same point in a set of sequence images is obtained.
Exemplarily, for an infrared light point whose same point is discontinuous in the sequence images, when it reappears, the infrared brightness level transformation of the same point in a group of sequence images is determined based on the infrared brightness level of the same point. For the number of frames in a group of sequence images, reference may be made to the foregoing embodiment, which is not repeated here.
S5043:根据所述光点的变换方式,获得所述序列图像中目标光点对应的标识。S5043: Obtain an identifier corresponding to the target light point in the sequence image according to the conversion mode of the light point.
可选地,所述根据所述光点的变换方式,获得所述序列图像中目标光点对应的标识,包括:Optionally, the obtaining the identifier corresponding to the target light point in the sequence image according to the light point transformation mode includes:
根据所述光点的变换方式,以及预设的变换方式与光点标识的对应关系,获得所述序列图像中目标光点对应的标识。According to the conversion mode of the light spot and the correspondence between the preset conversion mode and the light spot identification, the identification corresponding to the target light spot in the sequence image is obtained.
Exemplarily, based on the preset correspondence between transformation modes and light-point identifiers, the preset transformation mode that is the same as the color transformation of the same point in a group of sequence images is found, and the identifier corresponding to that preset transformation mode is the identifier of the same point. For example, if the color of the same point in a group of sequence images transforms as RGBRGB, and the identifier corresponding to the preset transformation mode RGBRGB is the first light point, then the same point is the first light point; if the color of the same point transforms as RRGGBB, and the identifier corresponding to the preset transformation mode RRGGBB is the second light point, then the same point is the second light point; if the brightness level of the same point transforms as 101010, and the identifier corresponding to the preset transformation mode 101010 is the third light point, then the same point is the third light point, where R, G and B denote red, green and blue respectively, and 1 and 0 denote bright and dark respectively.
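The preset correspondence between transformation modes and light-point identifiers can be held in a simple lookup table, as in the following sketch, which uses only the example patterns quoted above (the identifier names are illustrative):

```python
# Preset correspondence between a transformation pattern over one period of
# frames and the light-point identifier (values taken from the examples above).
PATTERN_TO_ID = {
    "RGBRGB": "light_point_1",  # color transformation R, G, B, R, G, B
    "RRGGBB": "light_point_2",  # color transformation R, R, G, G, B, B
    "101010": "light_point_3",  # brightness levels alternating bright/dark
}

def identify_point(observed_pattern):
    """Look up the identifier of a reappearing point from its observed color /
    brightness sequence over one group of sequence images."""
    return PATTERN_TO_ID.get(observed_pattern)  # None if the pattern is unknown

# e.g. identify_point("RRGGBB") -> "light_point_2"
```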
根据所述光点的变换方式,以及预设的变换方式与光点标识的对应关系,能够更加准确、方便地确定序列图像中目标光点对应的标识。According to the conversion mode of the light spot and the corresponding relationship between the preset conversion mode and the light spot identification, the identification corresponding to the target light spot in the sequence image can be determined more accurately and conveniently.
S505:基于所述目标光点对应的标识,确定所述目标光点在所述序列图像中每一帧图像中的映射位置。S505: Based on the identifier corresponding to the target light spot, determine the mapping position of the target light spot in each frame of the image sequence.
S506:根据所述映射位置、所述目标光点的初始位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述控制器的六自由度追踪数据。S506: Obtain six-degree-of-freedom tracking data of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller.
该S505-S506与上述S303-S304实现方式相同,在此不再赘述。The implementation of S505-S506 is the same as the above-mentioned S303-S304, and will not be repeated here.
In the controller tracking method provided by the embodiments of the present application, the controller carries a multi-point light-emitting unit. The method extracts the light points in the sequence images and, based on the light points, identifies the same points of adjacent frames in the sequence images. If the same point is continuous, that is, a light point can be found in every frame of the sequence images, the identifier corresponding to that light point in each frame can be determined directly from its initial identifier without using the transformation mode of the light point, which simplifies the operation flow and improves tracking efficiency. If the same point is discontinuous in the sequence images, the identifier corresponding to the target light point needs to be determined through the transformation mode of the light point when it reappears: the image acquisition device acquires the transformed sequence images of the multi-point light-emitting unit during the movement of the controller, and the transformation mode of the light points in the sequence images is determined. Since each light point has a different transformation mode, the identifier corresponding to the target light point in the sequence images can be determined more accurately and conveniently according to the transformation mode of the light point and the preset correspondence between transformation modes and light-point identifiers. Then, based on the identifier corresponding to the target light point, the mapping position of the target light point in each frame of the sequence images is determined. From the mapping position and the initial position of the target light point, the position of the target light point relative to the image acquisition device during the movement of the controller is obtained, and then, from that relative position and the position of the image acquisition device, the position of the target light point is obtained. Since the three-dimensional geometric structure of the multi-point light-emitting unit in the controller is fixed, once the position of the target light point is obtained, the three-dimensional spatial position and rotation attitude of the controller can be determined, realizing six-degree-of-freedom tracking of the controller and improving the interactivity between the user and the surrounding environment.
对应于上文实施例的控制器的追踪方法,图11为本申请实施例提供的一种控制器的追踪装置的结构示意图。为了便于说明,仅示出了与本申请实施例相关的部分。如图11所示,控制器的追踪装置60包括:第一确定模块601、第一获得模块602、第二确定模块603和第二获得模块604。Corresponding to the tracking method of the controller in the above embodiment, FIG. 11 is a schematic structural diagram of a tracking device for a controller provided in an embodiment of the application. For ease of description, only the parts related to the embodiments of the present application are shown. As shown in FIG. 11, the tracking device 60 of the controller includes: a first determining module 601, a first obtaining module 602, a second determining module 603, and a second obtaining module 604.
第一确定模块601,用于根据图像获取装置获取所述控制器在移动过程中所述多点发光单元的变换序列图像,确定所述序列图像中光点的变换方式;The first determining module 601 is configured to acquire, according to the image acquisition device, the transformed sequence image of the multi-point light-emitting unit during the movement of the controller, and determine the transformation mode of the light points in the sequence image;
第一获得模块602,用于根据所述光点的变换方式,获得所述序列图像中目标光点对应的标识;The first obtaining module 602 is configured to obtain the identifier corresponding to the target light point in the sequence image according to the conversion mode of the light point;
第二确定模块603,用于基于所述目标光点对应的标识,确定所述目标光点在所述序列图像中每一帧图像中的映射位置;The second determining module 603 is configured to determine the mapping position of the target light point in each frame of the image sequence based on the identifier corresponding to the target light point;
第二获得模块604,用于根据所述映射位置、所述目标光点的初始位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述控制器的六自由度追踪数据。The second obtaining module 604 is configured to obtain the six-degree-of-freedom tracking data of the controller according to the mapping position, the initial position of the target light spot, and the position of the image obtaining device during the movement of the controller .
本申请实施例提供的装置,可用于执行上述方法实施例的技术方案,其实现原理和技术效果类似,本申请实施例此处不再赘述。The device provided in the embodiment of the present application can be used to implement the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and the details of the embodiments of the present application are not repeated here.
图12为本申请实施例提供的另一种控制器的追踪装置的结构示意图。如图12所示,本实施例提供的控制器的追踪装置60,在图11实施例的基础上,还包括:获取模块605、处理模块606。FIG. 12 is a schematic structural diagram of another tracking device for a controller provided by an embodiment of the application. As shown in FIG. 12, the tracking device 60 of the controller provided in this embodiment, on the basis of the embodiment in FIG. 11, further includes: an acquisition module 605 and a processing module 606.
可选地,获取模块605,用于在所述第二获得模块604获得所述控制器的六自由度追踪数据之前,Optionally, the obtaining module 605 is configured to, before the second obtaining module 604 obtains the six-degree-of-freedom tracking data of the controller,
获取IMU发送的对所述控制器进行姿态追踪的结果;Acquiring the result of the attitude tracking of the controller sent by the IMU;
所述第二获得模块604获得所述控制器的六自由度追踪数据,包括:The second obtaining module 604 obtains the six-degree-of-freedom tracking data of the controller, including:
根据所述映射位置、所述目标光点的初始位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述控制器的位置和姿态;Obtaining the position and posture of the controller according to the mapping position, the initial position of the target light spot, and the position of the image acquisition device during the movement of the controller;
将所述控制器的位置和姿态,以及所述IMU发送的对所述控制器进行姿态追踪的结果,进行融合,获得所述控制器的六自由度追踪数据。The position and posture of the controller and the result of posture tracking of the controller sent by the IMU are merged to obtain the six-degree-of-freedom tracking data of the controller.
可选地,处理模块606,用于提取所述序列图像中的光点;Optionally, the processing module 606 is configured to extract light points in the sequence of images;
基于所述光点,识别出所述序列图像中相邻帧的相同点;Based on the light points, identifying the same points of adjacent frames in the sequence of images;
判断所述相同点是否连续;Judge whether the same points are continuous;
若所述相同点连续,则第一获得模块602根据所述光点的初始标识,获得所述序列图像中目标光点对应的标识;If the same points are continuous, the first obtaining module 602 obtains the identifier corresponding to the target light point in the sequence image according to the initial identifier of the light point;
If the same points are not continuous, the first determining module 601 performs the step of acquiring, via the image acquisition device, the transformed sequence images of the multi-point light-emitting unit during the movement of the controller and determining the transformation mode of the light points in the sequence images.
可选地,所述光点为LED光点,所述变换方式包括颜色变换和/或亮度等级变换,Optionally, the light spot is an LED light spot, and the conversion method includes color conversion and/or brightness level conversion,
所述第一确定模块601确定所述序列图像中光点的变换方式,包括:The first determining module 601 determines the transformation mode of light points in the sequence image, including:
根据所述相同点的颜色和/或亮度等级,确定所述相同点在一组序列图像中的颜色变换和/或亮度等级变换。According to the color and/or brightness level of the same point, determine the color transformation and/or brightness level transformation of the same point in a set of sequence images.
可选地,所述光点为红外光点,所述变换方式包括红外亮暗等级变换,Optionally, the light spot is an infrared light spot, and the conversion method includes infrared light-dark level conversion,
所述第一确定模块601确定所述序列图像中光点的变换方式,包括:The first determining module 601 determines the transformation mode of light points in the sequence image, including:
根据所述相同点的红外亮暗等级,得到所述相同点在一组序列图像中的红外亮暗等级变换。According to the infrared light-dark level of the same point, the infrared light-dark level transformation of the same point in a set of sequence images is obtained.
可选地,所述处理模块606基于所述光点,识别出所述序列图像中相邻帧的相同点,包括:Optionally, the processing module 606 recognizes the same points of adjacent frames in the sequence of images based on the light points, including:
获得所述序列图像中相邻帧的光点中心之间的距离;Obtaining the distance between the centers of light spots of adjacent frames in the sequence of images;
根据所述距离和预设距离阈值,识别出所述序列图像中相邻帧的相同点。According to the distance and a preset distance threshold, the same points of adjacent frames in the sequence of images are identified.
可选地,所述第一获得模块602根据所述光点的变换方式,获得所述序列图像中目标光点对应的标识,包括:Optionally, the first obtaining module 602 obtains the identifier corresponding to the target light point in the sequence image according to the conversion mode of the light point, including:
根据所述光点的变换方式,以及预设的变换方式与光点标识的对应关系,获得所述序列图像中目标光点对应的标识。According to the conversion mode of the light spot and the correspondence between the preset conversion mode and the light spot identification, the identification corresponding to the target light spot in the sequence image is obtained.
可选地,所述目标光点的数量不少于预设数量;Optionally, the number of the target light points is not less than a preset number;
所述第二获得模块604根据所述映射位置、所述目标光点的初始位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述控制器的位置和姿态,包括;The second obtaining module 604 obtains the position and posture of the controller according to the mapping position, the initial position of the target light spot, and the position of the image obtaining device during the movement of the controller, including;
根据所述映射位置、所述目标光点的初始位置,通过PnP算法,获得所述目标光点相对于所述图像获取装置的位置;Obtaining the position of the target light spot relative to the image acquisition device through a PnP algorithm according to the mapping position and the initial position of the target light spot;
根据所述目标光点相对于所述图像获取装置的位置和所述控制器在移动过程中所述图像获取装置的位置,获得所述目标光点的位置;Obtaining the position of the target light spot according to the position of the target light spot relative to the image acquisition device and the position of the image acquisition device during the movement of the controller;
根据所述目标光点的位置,获得所述控制器的位置和姿态。According to the position of the target light spot, the position and posture of the controller are obtained.
本申请实施例提供的装置,可用于执行上述方法实施例的技术方案,其实现原理和技术效果类似,本申请实施例此处不再赘述。The device provided in the embodiment of the present application can be used to implement the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and the details of the embodiments of the present application are not repeated here.
图13为本申请实施例提供的控制器的追踪设备的硬件结构示意图。如图13所示,本实施例的控制器的追踪设备80包括:处理器801以及存储器802;其中FIG. 13 is a schematic diagram of the hardware structure of the tracking device of the controller provided by an embodiment of the application. As shown in FIG. 13, the tracking device 80 of the controller in this embodiment includes: a processor 801 and a memory 802; wherein
存储器802,用于存储计算机执行指令;The memory 802 is used to store computer execution instructions;
处理器801,用于执行存储器存储的计算机执行指令,以实现上述实施例中控制器的追踪方法的各个步骤。具体可以参见前述方法实施例中的相关描述。The processor 801 is configured to execute computer-executable instructions stored in the memory to implement each step of the tracking method of the controller in the foregoing embodiment. For details, please refer to the relevant description in the foregoing method embodiment.
可选地,存储器802既可以是独立的,也可以跟处理器801集成在一起。Optionally, the memory 802 may be independent or integrated with the processor 801.
当存储器802独立设置时,该追踪设备还包括总线803,用于连接所述存储器802和处理器801。When the memory 802 is set independently, the tracking device further includes a bus 803 for connecting the memory 802 and the processor 801.
本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机执行指令,当处理器执行所述计算机执行指令时,实现如上所述的控制器的追踪方法。The embodiment of the present application also provides a computer-readable storage medium, and the computer-readable storage medium stores computer-executable instructions. When the processor executes the computer-executed instructions, the tracking method of the controller as described above is implemented.
图14为本申请实施例提供的VR系统的结构示意图。如图14所示,本实施例的VR系统90包括:一体机901和控制器902。其中,一体机901设置有控制器的追踪处理器9011和图像获取装置9012等。控制器902携带有多点发光单元,多点发光单元包括多个光点,该控制器的一种可能的使用形态为手柄。控制器的追踪处理器9011被配置成执行上述方法。可选地,上述控制器上设置有IMU9021。本实施例提供的VR系统的实现原理和技术效果可见上述方法实施例,此处不再赘述。FIG. 14 is a schematic structural diagram of a VR system provided by an embodiment of the application. As shown in FIG. 14, the VR system 90 of this embodiment includes: an all-in-one machine 901 and a controller 902. Among them, the all-in-one machine 901 is provided with a tracking processor 9011 of a controller, an image acquisition device 9012 and the like. The controller 902 carries a multi-point light-emitting unit, and the multi-point light-emitting unit includes a plurality of light points. A possible use form of the controller is a handle. The tracking processor 9011 of the controller is configured to perform the above-mentioned method. Optionally, an IMU9021 is provided on the aforementioned controller. The implementation principles and technical effects of the VR system provided in this embodiment can be seen in the foregoing method embodiments, and will not be repeated here.
本申请实施例还提供一种AR系统,包括:一体机和控制器。其中,一体机设置有控制器的追踪处理器和图像获取装置等。控制器携带有多点发光单元,多点发光单元包括多个光点,该控制器的一种可能的使用形态为手柄。可选地,上述控制器上设置有IMU。控制器的追踪处理器可用于执行上述方法实施例的技术方案,其实现原理和技术效果类似,本申请实施例此处不再赘述。An embodiment of the present application also provides an AR system, including: an all-in-one machine and a controller. Among them, the all-in-one machine is provided with a tracking processor of the controller, an image acquisition device, and the like. The controller carries a multi-point light-emitting unit, the multi-point light-emitting unit includes a plurality of light points, and a possible use form of the controller is a handle. Optionally, an IMU is provided on the aforementioned controller. The tracking processor of the controller can be used to execute the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and the details are not repeated here in the embodiments of the present application.
本申请实施例还提供一种MR系统,包括:一体机和控制器。其中,一体机设置有控制器的追踪处理器和图像获取装置等。控制器携带有多点发光单元,多点发光单元包括多个光点,该控制器的一种可能的使用形态为手柄。可选地,上述控制器上设置有IMU。控制器的追踪处理器可用于执行上述方法实施例的技术方案,其实现原理和技术效果类似,本申请实施例此处不再赘述。The embodiment of the present application also provides an MR system, including: an all-in-one machine and a controller. Among them, the all-in-one machine is provided with a tracking processor of the controller, an image acquisition device, and the like. The controller carries a multi-point light-emitting unit, the multi-point light-emitting unit includes a plurality of light points, and a possible use form of the controller is a handle. Optionally, an IMU is provided on the aforementioned controller. The tracking processor of the controller can be used to execute the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and the details are not repeated here in the embodiments of the present application.
本申请实施例还提供一种XR系统,包括:一体机和控制器。其中,一体机设置有控制器的追踪处理器和图像获取装置等。控制器携带有多点发光单元,多点发光单 元包括多个光点,该控制器的一种可能的使用形态为手柄。可选地,上述控制器上设置有IMU。控制器的追踪处理器可用于执行上述方法实施例的技术方案,其实现原理和技术效果类似,本申请实施例此处不再赘述。An embodiment of the present application also provides an XR system, including: an all-in-one machine and a controller. Among them, the all-in-one machine is provided with a tracking processor of the controller, an image acquisition device, and the like. The controller carries a multi-point light-emitting unit, and the multi-point light-emitting unit includes multiple light points. A possible use form of the controller is a handle. Optionally, an IMU is provided on the aforementioned controller. The tracking processor of the controller can be used to execute the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar, and the details are not repeated here in the embodiments of the present application.
虚拟现实头盔VR显示,在各行各业如教育培训、消防演练、虚拟驾驶、房地产等项目中具有广泛的应用。在VR场景中,用户经常需要输入一些信息实现VR交互。例如,输入账号、密码;或者在观看音频、视频的过程中,调节音量,调节画面大小等等。目前,VR领域内相关的交互方式主要有:Virtual reality helmet VR display has a wide range of applications in various industries such as education and training, fire drills, virtual driving, real estate and other projects. In VR scenarios, users often need to input some information to achieve VR interaction. For example, enter the account number and password; or adjust the volume and screen size while watching audio and video. At present, the relevant interactive methods in the VR field mainly include:
1)通过VR自带的触摸板进行交互,但由于在佩戴VR头盔时是进行盲操作,因此触摸板的位置不好确定,再者用户需要高举手臂操作,时间久了会感觉很累,特别是在冬季穿衣较多时尤其不便。1) Interaction is through the touchpad that comes with VR, but because it is blindly operated when wearing a VR helmet, the position of the touchpad is not easy to determine. Moreover, the user needs to raise his arm to operate, and it will feel very tired after a long time. It is especially inconvenient when you wear more clothes in winter.
2) Head-movement hovering. The input mode of shell-type virtual reality head-mounted display devices mostly relies on the inertial sensing unit in the mobile phone. The user moves the cursor by turning the head, and hovering on the option to be selected (such as confirm, return, music, video) for a certain time (such as 3 s or 5 s) acts as the selection/confirmation operation. Although simple, this method sometimes fails to respond, and the hover duration is hard to define: if it is too short, misoperation is likely; if it is too long, operation efficiency is low and the user is easily confused, annoyed and impatient, giving a poor user experience.
3) Voice recognition. This method can achieve interaction simply and effectively, but ambiguity sometimes occurs and the effect of speech and semantic recognition is poor, especially since regional cultures differ and there may be certain differences between local dialects and the standard language. Moreover, this method is not suitable for deaf-mute users and therefore has certain limitations.
4) With the help of traditional peripherals such as a keyboard and mouse. This is currently the most common way to input text on VR devices, but the keyboard peripheral has to be carried by the user at all times; if it is not at hand, the user has to interrupt the experience and take off the helmet to look for it, which adds a burden to the user and affects the experience.
5) Interaction through traditional binocular gesture-recognition technology. This can be regarded as a better interaction method, but gesture operation has certain restrictions, the interaction process is highly targeted, and the technology is not mature enough in application: it is very difficult to operate a VR system proficiently with it, and the accuracy and flexibility are relatively poor. At the same time, holding the arm within the camera's field of view for a long time is very tiring, and the user experience is poor.
6) When playing a game in a virtual scene, if the user needs to call up a menu for other operations, such as returning to the main interface to look at other resources, the return is only possible after the current game ends. Some players cannot even quickly and intuitively find a way to exit while playing, and cannot call up the main interface directly with one key.
当前的VR交互输入方式普遍存在灵敏度低、准确度低、响应差和操作不便的问题。Current VR interactive input methods generally have the problems of low sensitivity, low accuracy, poor response and inconvenient operation.
FIG. 15 is a scene diagram of a virtual reality interaction provided by the related art. In VR scenarios, users often need to input some information to achieve VR interaction. As shown in FIG. 15, the application scenario includes a helmet 11 and a peripheral device 12. The user wears the helmet 11, which can render a VR interface in the virtual scene, and the user can interact with the VR interface displayed in the helmet 11 through the peripheral device 12. For example, the user may enter an account and password; adjust the volume or the picture size while watching audio or video; or, during a game, the peripheral device corresponds to a prop in the VR interface, such as a sword or a gun, and the user inputs information to the VR game interface through the peripheral device so as to control the prop in the VR game interface.
In other scenarios, users can also perform VR interaction through the touchpad built into the VR device, head-movement hovering, voice recognition, and binocular gesture recognition, but these VR input methods generally suffer from inconvenient operation, poor sensitivity, low accuracy, and poor responsiveness. In particular, it is inconvenient to enter a password when paying, entering text, or logging in to an account.
In response to the above problems, the embodiments of the present application provide a virtual reality control device. There is a preset mapping relationship between the position points on the control device and the position points on the VR interface rendered by the VR helmet. The user can perform touch operations on the control device, and a touch operation carries position information; from the touch position information and the preset mapping relationship, the user's operation position in the VR interface can be determined. For the user, this is as convenient as operating a computer screen or a mobile phone screen as usual.
下面以具体地实施例对本申请的技术方案以及本申请的技术方案如何解决上述技术问题进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。下面将结合附图,对本申请的实施例进行描述。The technical solution of the present application and how the technical solution of the present application solves the above technical problems will be described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present application will be described below in conjunction with the accompanying drawings.
图16为本申请实施例提供的一种虚拟现实的控制设备的控制逻辑图。本申请实施例针对相关技术的如上技术问题,提供了虚拟现实的控制设备,如图16所示,该虚拟现实的控制设备包括:触控界面21-1、控制器22-1和通信器23-1;FIG. 16 is a control logic diagram of a virtual reality control device provided by an embodiment of the application. The embodiments of the present application provide a virtual reality control device in response to the above technical problems of related technologies. As shown in FIG. 16, the virtual reality control device includes: a touch interface 21-1, a controller 22-1, and a communicator 23 -1;
其中,触控界面21-1,用于接收用户的触控操作信息。Among them, the touch interface 21-1 is used to receive user touch operation information.
控制器22-1,连接至触控界面21-1,用于基于触控操作信息,确定用户的触控操作和在触控界面上的操作位置信息。The controller 22-1, connected to the touch interface 21-1, is used to determine the user's touch operation and operation position information on the touch interface based on the touch operation information.
通信器23-1,连接至控制器22-1,用于将触控操作和操作位置信息发送至头盔的处理器,以使头盔的处理器根据控制设备和头盔的处理器渲染的VR界面上位置点的预设映射关系,以及操作位置信息,确定VR界面中的对应位置,并在对应位置处进行与触控操作相应的操作行为。其中,通信器的通信方式可以是有线通信,也可以是无线通信,本实施例对此不做具体限定。The communicator 23-1, connected to the controller 22-1, is used to send the touch operation and operating position information to the processor of the helmet, so that the processor of the helmet is displayed on the VR interface rendered by the control device and the processor of the helmet The preset mapping relationship of the position points and the operation position information determine the corresponding position in the VR interface, and perform the operation behavior corresponding to the touch operation at the corresponding position. The communication mode of the communicator may be wired communication or wireless communication, which is not specifically limited in this embodiment.
In this embodiment, it can be understood that the control device is provided with a touch interface having multiple touch position points, and the VR interface rendered by the processor of the helmet also has multiple VR position points. Through the preset mapping relationship between the position points on the control device and the position points on the VR interface, the position points on the touch interface can be mapped into the VR interface, and according to the user's operation behavior on the touch interface, the VR helmet performs the corresponding operation at the corresponding position in the VR interface. For example, as shown in FIG. 17, position point A on the touch interface 21-1 corresponds to position point B on the VR interface, so if the user performs a selection operation at position point A, the selection operation is also performed at position point B on the VR interface. As another example, as shown in FIG. 18, the user moves from position point A1 on the touch interface 21-1 along trajectory a1 (shown by the arc in the touch interface in FIG. 18) to position point A2; if position point B1 on the VR interface corresponds to A1, position point B2 corresponds to A2, and trajectory v1 (shown by the arc in the VR interface in FIG. 18) corresponds to a1, then the VR interface also performs the operation of moving from position point B1 along trajectory v1 to position point B2.
In some scenarios, the embodiments of the present application may also render the control device in the virtual reality scene. In this case, since the posture information of the control device may change at any time, in order to realize posture tracking of the control device, an inertial measurement unit 24-1 may be added to the control device on the basis of the foregoing embodiments. As shown in FIG. 19, the inertial measurement unit 24-1 is connected to the controller 22-1 and is used to measure the angular velocity information of the control device and send it to the controller. The controller 22-1 is further used to calculate the posture information of the control device based on the angular velocity information and to send the posture information to the processor of the helmet, so that the processor of the helmet adjusts the posture information of the rendered VR interface based on the posture information.
In some scenarios, the position information of the control device may also change at any time. In order to track the position of the control device, the inertial measurement unit 24-1 may also be used to measure the acceleration information of the control device and send it to the controller 22-1, and the controller 22-1 calculates the position information of the control device based on the acceleration information and then sends the position information to the processor of the helmet, so that the processor of the helmet adjusts the position information of the rendered VR interface based on the position information.
可选的,惯性测量单元24-1还可以同时采集控制设备的加速度信息和角速度信息,并发送至控制器22-1;以及使控制器22-1基于加速度信息计算得到控制设备的位置信息,和基于角速度信息计算得到控制设备的姿态信息;之后将位置信息和姿态信息共同发送至头盔的处理器,以使头盔的处理器基于位置信息调整渲染的VR界面的位置信息,并基于姿态信息调整渲染的VR界面的姿态。Optionally, the inertial measurement unit 24-1 can also collect acceleration information and angular velocity information of the control device at the same time, and send them to the controller 22-1; and make the controller 22-1 calculate the position information of the control device based on the acceleration information, And calculate the attitude information of the control device based on angular velocity information; then send the position information and attitude information to the processor of the helmet together, so that the processor of the helmet adjusts the position information of the rendered VR interface based on the position information, and adjusts based on the attitude information The pose of the rendered VR interface.
The inertial measurement unit 24-1 can measure the attitude angles of the control device, which include the roll angle (roll), pitch angle (pitch) and yaw angle (yaw). As shown in FIG. 20, taking an airplane as an example, with a point on the airplane as the origin, the direction along the length of the fuselage as the Y axis, the direction perpendicular to the Y axis in the horizontal plane as the X axis, and the direction of gravity as the Z axis, an XYZ coordinate system is established; the pitch angle is the angle produced by rotation around the X axis, yaw is the angle produced by rotation around the Y axis, and roll is the angle produced by rotation around the Z axis. The inertial measurement unit includes a gyroscope, an accelerometer and a magnetometer. The gyroscope measures angular velocity, and the attitude angles can be obtained by integrating the angular velocity. However, errors arise during integration and accumulate over time, eventually causing attitude angle deviations. Therefore, the accelerometer can also measure the acceleration and gravity information of the control device, and the acceleration information can then be used to correct the attitude angle deviations related to the direction of gravity, that is, to correct the deviations of the roll and pitch angles. The measurement data of the magnetometer can be used to calculate the yaw angle, and thereby further correct the posture information.
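A minimal sketch of the correction described above, with the gyroscope integration pulled back toward the accelerometer's gravity reference for roll and pitch and toward the magnetometer for yaw; the filter gain and the axis conventions used here are assumptions, not part of this application:

```python
import math

ALPHA = 0.98  # assumed complementary-filter gain

def update_attitude(roll, pitch, yaw, gyro, accel, mag_yaw, dt):
    """One attitude update step for the control device.

    roll, pitch, yaw: current attitude in radians.
    gyro:    (wx, wy, wz) angular velocity in rad/s from the gyroscope.
    accel:   (ax, ay, az) acceleration in m/s^2, dominated by gravity at rest.
    mag_yaw: yaw angle computed from the magnetometer, in radians.
    dt:      time step in seconds.
    """
    # Integrate angular velocity (this alone drifts over time).
    roll += gyro[0] * dt
    pitch += gyro[1] * dt
    yaw += gyro[2] * dt

    # The gravity direction measured by the accelerometer corrects roll/pitch drift.
    ax, ay, az = accel
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = ALPHA * roll + (1.0 - ALPHA) * roll_acc
    pitch = ALPHA * pitch + (1.0 - ALPHA) * pitch_acc

    # The magnetometer observation corrects yaw drift.
    yaw = ALPHA * yaw + (1.0 - ALPHA) * mag_yaw
    return roll, pitch, yaw
```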
本申请实施例的控制设备可以提供多种控制模式,在不同的控制模式下,用户可以进行不同的交互体验。The control device of the embodiment of the present application can provide multiple control modes, and in different control modes, users can have different interactive experiences.
In an optional implementation, the control device corresponds to a first control mode. The control device may further be used to, upon detecting that the user selects the first control mode, acquire the user's touch position information on the touch interface 21-1 and send it to the processor of the helmet, so that the processor of the helmet, based on the touch position information and the preset mapping relationship between the position points of the control device and those of the VR interface rendered by the processor of the helmet, determines the corresponding position in the VR interface and moves the cursor on the VR interface rendered in the virtual scene to that position. In this implementation, the processor of the helmet renders the VR interface in the virtual scene, and the user can touch the touch interface just as a finger touches a mobile phone screen. The user's touch action on the touch interface produces touch position information that indicates the user's touch position point on the touch interface; from that touch position point and the preset mapping relationship between the position points of the touch interface and those of the VR interface, the corresponding position point of the VR interface can be determined.
As shown in FIG. 21, the touch interface has multiple touch position points (shown by the dashed boxes in the figure). As shown in FIG. 22, when the user touches these touch position points, touch operation information is formed and sent to the helmet, and the helmet can determine the corresponding position on the VR main interface from the touch operation information and the preset mapping relationship.
在上述实施例的基础上,用户还可以在触摸操作之后进行确认操作。此种情况下,则控制设备,还可以用于获取用户在触控界面上的确认按键信息,并发送至头盔的处理器,以使头盔的处理器对VR界面上的当前对象执行确认操作。可选的,确认操作可以是点击操作。例如,用户在触控界面上触摸,并停留在某一位置点处进行点击操作,则可以视为用户在触控界面上输入了确认按键信息。On the basis of the above-mentioned embodiment, the user can also perform a confirmation operation after the touch operation. In this case, the control device can also be used to obtain the user's confirmation button information on the touch interface, and send it to the processor of the helmet, so that the processor of the helmet performs a confirmation operation on the current object on the VR interface. Optionally, the confirmation operation may be a click operation. For example, if the user touches on the touch interface and stays at a certain point to perform a click operation, it can be regarded as the user inputting confirmation button information on the touch interface.
在第一控制模式下,头盔的处理器可以将待显示的VR界面渲染在虚拟场景中,此种情况下,触控界面与VR界面之间具有预设比例关系。其中,预设比例关系可以是1:1,可以是大于1:1,还可以小于1:1,本实施例对此不做具体限定。无论是哪种预设比例关系,都可以根据用户在控制设备上的触摸位置点坐标,以及该预设比例关系,将其映射到VR界面的对应位置处。In the first control mode, the processor of the helmet can render the VR interface to be displayed in a virtual scene. In this case, there is a preset proportional relationship between the touch interface and the VR interface. Wherein, the preset ratio relationship may be 1:1, may be greater than 1:1, and may also be less than 1:1, which is not specifically limited in this embodiment. Regardless of the preset ratio relationship, it can be mapped to the corresponding position of the VR interface according to the coordinates of the user's touch position on the control device and the preset ratio relationship.
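Under such a preset proportional relationship, mapping a touch coordinate to the VR interface reduces to a per-axis scale; the interface sizes below are assumed purely for illustration:

```python
# Assumed sizes, each in its own coordinate units.
TOUCH_WIDTH, TOUCH_HEIGHT = 800.0, 480.0   # touch interface of the control device
VR_WIDTH, VR_HEIGHT = 1600.0, 960.0        # rendered VR interface (here a 1:2 ratio)

def touch_to_vr(x_touch, y_touch):
    """Map a touch position on the control device to the corresponding position
    on the VR interface using the preset proportional relationship."""
    scale_x = VR_WIDTH / TOUCH_WIDTH
    scale_y = VR_HEIGHT / TOUCH_HEIGHT
    return x_touch * scale_x, y_touch * scale_y

# e.g. touch_to_vr(400, 240) -> (800.0, 480.0): the centre maps to the centre.
```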
In the first control mode, for the user, the control device behaves like a mouse. The difference from a mouse is that the user moves the position point on the VR interface by touch movement on the control device; that is, the user's touch movement on the control device is equivalent to moving a mouse.
In another optional implementation, the control device corresponds to a second control mode. The control device may further be used to, upon detecting that the user selects the second control mode, acquire the user's touch operation on the touch interface and send it to the processor of the helmet, so that the processor of the helmet performs the corresponding operation at the corresponding position based on the user's touch operation.
在该实施方式中,如图23所示,头盔的处理器渲染的VR界面位于触控界面上,当用户在触控界面上触控时,就像是在手机屏幕上触控一样。对于用户而言,就相当于直接在VR界面上进行操作一样,与第一控制模式不同之处在于,用户可以直接在任一触摸位置处进行点击操作,从而实现确认操作。对于佩戴VR头盔的其余用户而言,在控制设备上渲染的VR界面是不可见的,仅对佩戴VR头盔的用户可见,用户在输入账号、密码时,就会很安全。In this embodiment, as shown in FIG. 23, the VR interface rendered by the processor of the helmet is located on the touch interface. When the user touches on the touch interface, it is like touching on the screen of a mobile phone. For the user, it is equivalent to directly operating on the VR interface. The difference from the first control mode is that the user can directly perform a click operation at any touch position, thereby realizing a confirmation operation. For other users wearing VR helmets, the VR interface rendered on the control device is invisible, and is only visible to the user wearing the VR helmet. When the user enters the account and password, it will be very safe.
在第二控制模式下,为了使用户达到所见即所得的体验效果,可以使触控界面与VR界面呈1:1的预设比例关系。In the second control mode, in order to enable the user to achieve a WYSIWYG experience effect, the touch interface and the VR interface can be in a preset ratio of 1:1.
In the second control mode, since the VR interface is rendered on the touch interface, in order to ensure that the VR interface adapts to changes in the pose of the touch interface, the position information and posture information derived from the data measured by the inertial measurement unit can be acquired in real time, and the position information and posture information of the VR interface can be adaptively adjusted based on the position information and posture information of the control device, so that the VR interface changes correspondingly as the pose of the touch interface changes.
To further improve the user experience, a first-point touch operation can also be designed in the above two control modes: when the user's touch operation is detected, a corresponding cursor, such as an arrow, a circle, or a movement indication, is displayed on the VR interface, which gives the user a good visualization effect.
在上述两种控制模式的基础上,还可以对用户提供模式选择功能,例如,用户可以自由选择是使用第一控制模式还是第二控制模式。在一种可选的实施方式中,可以在控制设备上设置模式按键,用户通过对模式按键的操作选择使用第一控制模式或者第二控制模式。On the basis of the above two control modes, a mode selection function can also be provided to the user. For example, the user can freely choose whether to use the first control mode or the second control mode. In an optional implementation manner, a mode button may be set on the control device, and the user selects to use the first control mode or the second control mode by operating the mode button.
在用户通过对模式按键的操作选择使用第一控制模式或者第二控制模式的实施方式中,可以有以下几种可选的实施方式:In the implementation manner in which the user selects the first control mode or the second control mode by operating the mode button, there may be the following optional implementation manners:
In the first optional implementation, a first mode button and a second mode button may be provided on the control device. When the user presses the first mode button, it means that the user chooses to use the first control mode; when the user presses the second mode button, it means that the user chooses to use the second control mode.
在第二种可选的实施方式中,可以在控制设备上设置一个模式按键,用户可以通过对该模式按键进行不同的操作来选择使用第一控制模式或者第二控制模式。示例性地,通过设定对模式按键的按下时间来确定用户是选择使用第一控制模式或者第二控制模式。例如,当用户长按该模式按键的情况下,则代表用户选择使用第一控制模式,当用户短按该模式按键的情况下,则代表用户选择使用第二控制模式。其中,长按和短按是相对而言,也就 是说,短按的时间短于长按。举例来说,若用户按下该模式按键,并随即松开,则代表用户选择第一控制模式,若用户长按模式按键超过5s,则代表用户选择第二控制模式。In a second optional implementation manner, a mode button can be provided on the control device, and the user can choose to use the first control mode or the second control mode by performing different operations on the mode button. Exemplarily, it is determined whether the user chooses to use the first control mode or the second control mode by setting the pressing time of the mode button. For example, when the user long presses the mode button, it means that the user chooses to use the first control mode, and when the user short presses the mode button, it means that the user chooses to use the second control mode. Among them, long press and short press are relative terms, that is, the time of short press is shorter than long press. For example, if the user presses the mode button and then releases it, it means that the user selects the first control mode. If the user presses the mode button for more than 5 seconds, it means that the user selects the second control mode.
在第三种可选的实施方式中,还可以采用语音控制的方式来选择使用第一控制模式或者第二控制模式。例如,用户通过发出语音指令“进入第一控制模式”,或者“进入第二控制模式”的形式进行语音控制。In a third optional implementation manner, voice control can also be used to select the first control mode or the second control mode. For example, the user performs voice control by issuing a voice command "enter the first control mode" or "enter the second control mode".
在第四种可选的实施方式中,还可以采用手势控制的方式来选择使用第一控制模式或者第二控制模式。例如,从下至上的提拉手势为打开第一控制模式;从左向右的提拉手势为打开第二控制模式。当然,也可以设置其他的手势来选择使用第一控制模式或者第二控制模式。本实施例再此不再一一赘述。In the fourth optional implementation manner, gesture control can also be used to select the first control mode or the second control mode. For example, a pulling gesture from bottom to top is to open the first control mode; a pulling gesture from left to right is to open the second control mode. Of course, other gestures can also be set to select the first control mode or the second control mode. This embodiment will not repeat them one by one again.
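For the single-button variant described in the second optional implementation above, the press duration alone distinguishes the two modes. A sketch with an assumed 5 s threshold follows; matching the worked example given there, a quick press-and-release is taken to select the first control mode and a long press the second:

```python
import time

LONG_PRESS_SECONDS = 5.0  # assumed threshold separating short and long presses

class ModeButton:
    """Selects a control mode from how long the mode button is held."""

    def __init__(self):
        self._pressed_at = None

    def on_press(self):
        self._pressed_at = time.monotonic()

    def on_release(self):
        if self._pressed_at is None:
            return None
        held = time.monotonic() - self._pressed_at
        self._pressed_at = None
        # Short press selects the first control mode, long press the second.
        return ("first_control_mode" if held < LONG_PRESS_SECONDS
                else "second_control_mode")
```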
在上述实施例的基础上,为了对用户提供输入反馈,使用户有更好的触感,还可以在控制设备内设置一马达。该实施例中,控制器,还用于在接收到用户的触控操作信息的情况下,发送震动指令至马达;马达,连接至控制器,用于基于震动指令进行震动。在该实施例中,可以理解为当用户在触控界面上有输入操作的情况下,控制器给用户的输入以回馈,类似于在手机屏幕上进行点击操作的情况下进行震动的方式。这样,能够在虚拟场景下为用户提供很好的触感。On the basis of the foregoing embodiment, in order to provide input feedback to the user, so that the user has a better sense of touch, a motor can also be provided in the control device. In this embodiment, the controller is also used to send a vibration instruction to the motor when the user's touch operation information is received; the motor is connected to the controller and is used to vibrate based on the vibration instruction. In this embodiment, it can be understood that when the user has an input operation on the touch interface, the controller gives feedback to the user's input, similar to the way of vibrating in the case of a tap operation on the mobile phone screen. In this way, it can provide users with a good sense of touch in a virtual scene.
In both of the above control modes, the touch interface may use capacitive touch or infrared touch. For capacitive touch, reference may be made to existing capacitive touch technology, which is not repeated here. For infrared touch, as shown in FIG. 24, infrared light-emitting diodes and photodiodes are arranged alternately along one side of the control device, and the control device further includes a control unit. The area enclosed by the square or rectangular dashed box in the figure is the touchable area. The light-emitting diodes emit infrared light; when a touch object such as a finger touches within the touchable area, the photodiodes receive the light reflected by the touch object, forming an optical grid, and the position of the touch point is determined from the emission and reception of the infrared light by an algorithm preset in the control unit.
As shown in FIG. 25, the control unit continuously scans the infrared emitting tubes and infrared receiving tubes. When a hand touches within the touchable area, part of the infrared light emitted by the infrared light-emitting diodes is reflected by the hand; photodiodes set at a certain angle receive the reflected light, and the control unit analyzes the changes in the transmitted and received signals to locate the touch point.
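One possible software reading of this scanning scheme is sketched below: each emitter is read against a no-touch baseline, and the touch coordinate is taken as the intensity-weighted centroid of the emitters whose reflected signal increased. The baseline subtraction, threshold, emitter pitch, and centroid step are assumptions used to make the idea concrete; the text only states that a preset algorithm in the control unit computes the position.

```python
def locate_touch(readings, baseline, pitch_mm=5.0, threshold=0.2):
    """Estimate the touch position along one edge of the touchable area.

    readings  -- photodiode values for the current scan, one per emitter
    baseline  -- photodiode values recorded with nothing in the touch area
    pitch_mm  -- assumed spacing between adjacent emitters
    threshold -- minimum reflection increase that counts as a touch
    """
    deltas = [max(r - b, 0.0) for r, b in zip(readings, baseline)]
    if max(deltas) < threshold:
        return None  # no object in the touchable area
    # Intensity-weighted centroid of the emitters that saw extra reflected light.
    total = sum(deltas)
    centroid_index = sum(i * d for i, d in enumerate(deltas)) / total
    return centroid_index * pitch_mm  # millimetres from the first emitter


if __name__ == "__main__":
    baseline = [0.05] * 8
    scan = [0.05, 0.06, 0.40, 0.90, 0.35, 0.06, 0.05, 0.05]  # finger near emitter 3
    print(locate_touch(scan, baseline))  # roughly 14.8 mm
```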
In the embodiments of the present application, the processor of the helmet can respond synchronously to the user's operation on a touch point of the touch interface. For example, when the user selects an immersive video to watch on the VR main interface, the control device is used to adjust the volume and the playback progress; when playing a game, the projected interface is called up to select resources, and so on. For the user, this is just like entering information on a mobile phone touch screen: it matches the user's natural habits and can be used directly without training.
On the basis of the two control modes provided by the foregoing embodiments, the embodiments of the present application may further include detecting whether to exit the control mode. If it is detected that the user wants to exit, the user may long-press the mode button again to exit the control mode; alternatively, while in a control mode, the control mode may be exited automatically when no touch operation occurs for a long time.
The foregoing embodiments introduced a control device for VR interaction in a virtual reality scene; a helmet for VR interaction in a virtual reality scene is introduced below.
FIG. 26 is a schematic diagram of the control logic of a helmet provided by an embodiment of the present application.
As shown in FIG. 26, the helmet provided by the embodiment of the present application includes a processor 121-1 connected to the control device and configured to render the VR interface and, upon receiving the user's touch operation and operation position information on the control device, determine the corresponding position in the VR interface according to the preset mapping relationship between position points on the control device and on the VR interface and the received operation position information, and perform, at that corresponding position, an operation corresponding to the touch operation.
On the basis of the foregoing embodiment, the processor 121-1 of the embodiment of the present application is further configured to obtain the position information and posture information of the control device, determine the projection position and posture information of the VR main interface to be displayed based on that position information and posture information, and, based on the projection position and posture information, project the VR main interface to be displayed to the corresponding position with the determined posture.
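One way to read the projection step is: place the VR main interface a fixed distance in front of the control device, along the device's forward axis, and orient it with the device's attitude. The sketch below follows that reading; the offset distance, the quaternion convention, and the helper names are assumptions for illustration, not the disclosed implementation.

```python
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def main_interface_pose(device_position, device_quat, offset_m=1.5):
    """Return (projection_position, projection_quat) for the VR main interface.

    The interface is placed offset_m metres along the device's forward (-Z) axis
    and takes the device's attitude, so it faces back toward the device.
    """
    rotation = quat_to_matrix(device_quat)
    forward = rotation @ np.array([0.0, 0.0, -1.0])   # assumed forward axis
    projection_position = np.asarray(device_position) + offset_m * forward
    return projection_position, device_quat


if __name__ == "__main__":
    pos, quat = main_interface_pose([0.0, 1.2, 0.0], (1.0, 0.0, 0.0, 0.0))
    print(pos)   # [ 0.   1.2 -1.5] with the identity attitude
```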
In the embodiments of the present application, at least the following two implementations may be adopted for pose tracking of the control device.
In an optional implementation, referring again to FIG. 26, the helmet of the embodiment of the present application further includes a camera 122-1 configured to capture images containing the control device, and the processor 121-1, connected to the camera 122-1, is configured to determine the position information of the control device based on those images. In this embodiment, the position information of the control device refers to its position in the world coordinate system. The processor determines this position by image processing: it first determines the position of the control device in the image coordinate system, and then converts that position into the world coordinate system based on the conversion relationship between the two coordinate systems, thereby obtaining the position information of the control device. For how the conversion relationship between the image coordinate system and the world coordinate system is determined, reference may be made to the related art, which is not repeated here.
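A minimal sketch of this camera-based branch: the light ball is detected in the image (here just a bright-blob centroid), back-projected with the pinhole model at an assumed depth, and transformed from the camera frame to the world frame. The intrinsic matrix, depth source, and blob detector are placeholders; the actual calibration and detection pipeline is left to the related art the paragraph refers to.

```python
import numpy as np

def detect_sphere_pixel(gray_image, threshold=200):
    """Centroid of bright pixels -- a crude stand-in for the real light-ball detector."""
    ys, xs = np.nonzero(gray_image >= threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def pixel_to_world(pixel, depth_m, K, R_wc, t_wc):
    """Back-project a pixel at a known depth and move it into the world frame.

    K          -- 3x3 camera intrinsic matrix
    R_wc, t_wc -- rotation and translation taking camera coordinates to world coordinates
    """
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction in camera coordinates
    p_camera = ray * depth_m
    return R_wc @ p_camera + t_wc


if __name__ == "__main__":
    image = np.zeros((480, 640), dtype=np.uint8)
    image[230:250, 310:330] = 255                     # synthetic light-ball blob
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    pixel = detect_sphere_pixel(image)
    world = pixel_to_world(pixel, depth_m=0.6, K=K,
                           R_wc=np.eye(3), t_wc=np.zeros(3))
    print(pixel, world)
```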
In another optional implementation, the processor 121-1 is further configured to obtain the position information and posture information of the control device from the control device itself; the position information and posture information are calculated from the acceleration information and angular velocity information of the control device, respectively. In this embodiment, the acceleration and angular velocity measured by the inertial measurement unit of the control device are integrated to obtain the position information and posture information, which are then sent to the processor of the helmet through the communicator of the control device.
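A rough sketch of this IMU branch: gyroscope readings are integrated into an attitude quaternion, the accelerometer reading is rotated into the world frame, gravity is removed, and the result is integrated twice into a position. The simple Euler integration and the gravity constant are assumptions; real trackers add filtering and drift correction that this passage does not detail.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # assumed world-frame gravity

def quat_multiply(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_multiply(quat_multiply(q, qv), q_conj)[1:]

def integrate_imu(q, position, velocity, gyro, accel, dt):
    """One Euler step: gyro -> attitude, accel -> velocity -> position."""
    # Attitude update from the body-frame angular velocity.
    dq = 0.5 * quat_multiply(q, np.concatenate(([0.0], gyro))) * dt
    q = q + dq
    q = q / np.linalg.norm(q)
    # Rotate the specific force into the world frame and remove gravity.
    accel_world = quat_rotate(q, accel) + GRAVITY
    velocity = velocity + accel_world * dt
    position = position + velocity * dt
    return q, position, velocity


if __name__ == "__main__":
    q = np.array([1.0, 0.0, 0.0, 0.0])
    pos, vel = np.zeros(3), np.zeros(3)
    for _ in range(100):  # 1 s of a stationary controller sampled at 100 Hz
        q, pos, vel = integrate_imu(q, pos, vel,
                                    gyro=np.zeros(3),
                                    accel=np.array([0.0, 0.0, 9.81]),
                                    dt=0.01)
    print(q, pos)   # attitude stays identity, position stays near zero
```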
Corresponding to the first control mode of the control device, the processor 121-1 is further configured to render the VR interface in the virtual scene and, upon receiving the user's touch position information on the control device, determine the corresponding position on the VR interface based on the touch position information and the preset mapping relationship between position points on the control device and on the VR interface rendered by the processor 121-1, and move the cursor on the VR interface rendered in the virtual scene to that corresponding position.
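The cursor mapping of the first control mode can be pictured as a fixed proportional map from the touch pad's coordinate range to the rendered interface's range, as in the sketch below; the pad size, interface resolution, and resulting ratio are illustrative assumptions (the embodiment only states a preset proportional relationship).

```python
def touch_to_vr(touch_xy, touch_size=(100.0, 60.0), vr_size=(1920.0, 1080.0)):
    """Map a touch-pad coordinate to the corresponding VR-interface coordinate.

    touch_size -- assumed touch pad extent in millimetres
    vr_size    -- assumed VR interface extent in pixels
    The mapping is the preset proportional relationship: each axis is scaled
    by vr_size / touch_size.
    """
    tx, ty = touch_xy
    sx = vr_size[0] / touch_size[0]
    sy = vr_size[1] / touch_size[1]
    return tx * sx, ty * sy


class VrCursor:
    """Keeps the cursor of the rendered VR interface in step with the touch pad."""

    def __init__(self):
        self.position = (0.0, 0.0)

    def on_touch_position(self, touch_xy):
        self.position = touch_to_vr(touch_xy)
        return self.position


if __name__ == "__main__":
    cursor = VrCursor()
    print(cursor.on_touch_position((50.0, 30.0)))   # centre of the pad -> (960.0, 540.0)
```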
On the basis of the first control mode, the processor 121-1 is further configured to perform a confirmation operation on the current object on the VR interface upon receiving the user's confirmation key information from the control device.
Corresponding to the second control mode of the control device, the processor 121-1 is further configured to render the VR interface on the touch interface and, upon receiving the user's touch operation on the touch interface, perform the corresponding operation at the corresponding position on the VR interface according to that touch operation.
For the specific implementations of the first control mode and the second control mode, reference may be made to the description of the control device, which is not repeated here. In addition, the description of the helmet's functions given in the control device section also applies to the helmet of this embodiment and is likewise not repeated.
In another embodiment of the present application, a virtual reality interaction system may also be provided. As shown in FIG. 27, the interaction system includes a virtual reality control device 131 and a helmet 132; for the control device 131 and the helmet 132, reference may be made to the foregoing embodiments, which are not repeated here.
FIG. 28 is a flowchart of a virtual reality interaction method provided by an embodiment of the present application. On the basis of the foregoing embodiments, as shown in FIG. 28, the virtual reality interaction method provided by this embodiment includes the following steps:
Step 1401: receive the user's touch operation information.
The execution subject of this embodiment may be the control device in the foregoing embodiments. The user's touch operation information refers to touch operation information on the control device.
Step 1402: based on the touch operation information, determine the user's touch operation and the operation position information on the control device.
Here, a touch operation refers to a touch action, such as moving, sliding, touching, or clicking, and operation position information refers to the position of the touch action on the control device.
Step 1403: send the touch operation and the operation position information to the processor of the helmet, so that the processor of the helmet determines the corresponding position in the VR interface according to the preset mapping relationship between position points on the control device and on the VR interface and the operation position information, and performs, at that position, an operation corresponding to the touch operation.
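Putting steps 1401 to 1403 together, the controller side can be sketched as packaging the touch action and its position into a small message and handing it to the helmet processor, which then applies the preset mapping; the message structure, field names, and per-axis scale below are assumptions, since the embodiment does not fix a wire format.

```python
from dataclasses import dataclass

@dataclass
class TouchMessage:
    action: str          # e.g. "move", "slide", "touch", "click"
    x_mm: float          # operation position on the control device
    y_mm: float

def controller_step(raw_event):
    """Steps 1401-1402: receive raw touch info and extract the action plus its position."""
    return TouchMessage(action=raw_event["type"],
                        x_mm=raw_event["x"],
                        y_mm=raw_event["y"])

def helmet_step(message, scale=(19.2, 18.0)):
    """Step 1403 (helmet side): map the device position to the VR interface using an
    assumed preset per-axis scale, then act at that position."""
    vr_x = message.x_mm * scale[0]
    vr_y = message.y_mm * scale[1]
    return message.action, (vr_x, vr_y)


if __name__ == "__main__":
    msg = controller_step({"type": "click", "x": 25.0, "y": 10.0})
    print(helmet_step(msg))   # ('click', (480.0, 180.0))
```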
Optionally, the method of the embodiment of the present application further includes: measuring the angular velocity information of the control device; calculating the posture information of the control device based on the angular velocity information; and sending the posture information to the processor of the helmet, so that the processor of the helmet adjusts the posture of the rendered VR interface based on the posture information.
Optionally, the method of the embodiment of the present application further includes: measuring the acceleration information of the control device; calculating the position information of the control device based on the acceleration information; and sending the position information to the processor of the helmet, so that the processor of the helmet adjusts the position of the rendered VR interface based on the position information.
Optionally, the control device corresponds to a first control mode, and the method of the embodiment of the present application further includes: upon detecting that the user selects the first control mode, obtaining the user's touch position information on the touch interface and sending it to the processor of the helmet, so that the processor of the helmet determines the corresponding position on the VR interface based on the touch position information and the preset mapping relationship between position points on the control device and on the VR interface rendered by the processor of the helmet, and moves the cursor on the VR interface rendered in the virtual scene to that position.
Optionally, the method of the embodiment of the present application further includes: upon detecting that the user selects the first control mode, obtaining the user's confirmation key information on the touch interface and sending it to the processor of the helmet, so that the processor of the helmet performs a confirmation operation on the current object on the VR interface.
Optionally, in the first control mode, the touch interface and the VR interface are in a preset proportional relationship.
Optionally, the control device corresponds to a second control mode, and the method of the embodiment of the present application further includes: upon detecting that the user selects the second control mode, obtaining the user's touch operation on the touch interface and sending it to the processor of the helmet, so that the processor of the helmet performs the corresponding operation at the corresponding position based on the user's touch operation.
Optionally, in the second control mode, the touch interface and the VR interface are in a preset proportional relationship, and the preset proportional relationship is 1:1.
Optionally, the control device corresponds to multiple control modes, and the method of the embodiment of the present application further includes: receiving the user's selection information for the multiple control modes and enabling the corresponding control mode based on the selection information, where the selection information includes at least one of mode button selection, voice command selection, and gesture selection.
Optionally, the method of the embodiment of the present application further includes: upon receiving the user's touch operation information, sending a vibration instruction to the motor and controlling the motor to vibrate based on the vibration instruction.
The virtual reality interaction method of the embodiment shown in FIG. 28 can be used to carry out the technical solution of the foregoing control device embodiment; its implementation principle and technical effect are similar and are not repeated here.
FIG. 29 is a flowchart of a virtual reality interaction method provided by another embodiment of the present application. On the basis of the foregoing embodiments, as shown in FIG. 29, the virtual reality interaction method provided by this embodiment includes the following steps:
Step 1501: render the VR interface.
The execution subject of this embodiment may be the helmet of the foregoing embodiments; the VR interface is rendered by the helmet.
Step 1502: upon receiving the user's touch operation and operation position information on the control device, determine the corresponding position in the VR interface according to the preset mapping relationship between position points on the control device and on the VR interface and the received operation position information.
Step 1503: perform, at the corresponding position, an operation corresponding to the touch operation.
For the specific implementation of steps 1502 and 1503, reference may be made to the foregoing embodiments, which is not repeated here.
Optionally, the method of the embodiment of the present application further includes: obtaining the position information and posture information of the control device; determining, based on that position information and posture information, the projection position and posture information of the VR main interface to be displayed; and, based on the projection position and posture information, projecting the VR main interface to be displayed to the corresponding position with the determined posture.
Optionally, the method of the embodiment of the present application further includes: capturing an image containing the control device, and determining the position information of the control device based on the image.
Optionally, the method of the embodiment of the present application further includes: obtaining the position information and posture information of the control device from the control device, where the position information and posture information are calculated from the acceleration information and angular velocity information of the control device, respectively.
Optionally, the method of the embodiment of the present application further includes: rendering the VR interface in the virtual scene and, upon receiving the user's touch position information on the control device, determining the corresponding position on the VR interface based on the touch position information and the preset mapping relationship between position points on the control device and on the VR interface rendered by the processor of the helmet, and moving the cursor on the VR interface rendered in the virtual scene to that position.
Optionally, the method of the embodiment of the present application further includes: upon receiving the user's confirmation key information from the control device, performing a confirmation operation on the current object on the VR interface.
Optionally, the method of the embodiment of the present application further includes: rendering the VR interface on the touch interface and, upon receiving the user's touch operation on the touch interface, performing the corresponding operation at the corresponding position on the VR interface according to the user's touch operation.
The virtual reality interaction method of the embodiment shown in FIG. 29 can be used to carry out the technical solution of the foregoing helmet embodiment; its implementation principle and technical effect are similar and are not repeated here.
FIG. 30 is a schematic structural diagram of a virtual reality control device provided by an embodiment of the present application. The virtual reality control device provided by this embodiment can execute the processing flow provided by the virtual reality control method embodiment shown in FIG. 28. As shown in FIG. 30, the virtual reality control device 160 includes a memory 161, a processor 162, a computer program, and a communication interface 163; the computer program is stored in the memory 161 and is configured to be executed by the processor 162 to carry out the processing flow provided by the virtual reality control method embodiment shown in FIG. 28.
The virtual reality control device of the embodiment shown in FIG. 30 can be used to carry out the technical solution of the foregoing method embodiment; its implementation principle and technical effect are similar and are not repeated here.
FIG. 31 is a schematic structural diagram of a virtual reality control device provided by an embodiment of the present application. The virtual reality control device provided by this embodiment can execute the processing flow provided by the virtual reality control method embodiment shown in FIG. 29. As shown in FIG. 31, the virtual reality control device 170 includes a memory 171, a processor 172, a computer program, and a communication interface 173; the computer program is stored in the memory 171 and is configured to be executed by the processor 172 to carry out the processing flow provided by the virtual reality control method embodiment shown in FIG. 29.
The virtual reality control device of the embodiment shown in FIG. 31 can be used to carry out the technical solution of the foregoing method embodiment; its implementation principle and technical effect are similar and are not repeated here.
In addition, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; the computer program is executed by a processor to implement the virtual reality interaction method of the embodiment shown in FIG. 28.
In addition, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; the computer program is executed by a processor to implement the virtual reality interaction method of the embodiment shown in FIG. 29.
In the embodiments of the present application, the above embodiments may refer to and draw on one another, and identical or similar steps and terms are not repeated one by one.
It should be understood that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in a different order; as long as the desired results of the technical solutions disclosed in the present application can be achieved, no limitation is imposed herein.
The above specific implementations do not constitute a limitation on the protection scope of the present application. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall be included in the protection scope of the present application.

Claims (10)

  1. A virtual reality-based controller photosphere tracking method, wherein the method comprises:
    determining, according to first posture information of a previous location point of a photosphere, second posture information of a next location point adjacent to the previous location point;
    determining second location information of the next location point according to first location information of the previous location point, the first posture information, and the second posture information; and
    generating and outputting, according to the second location information, a current display position of a virtual target corresponding to the controller.
  2. The method according to claim 1, wherein determining the second location information of the next location point according to the first location information of the previous location point, the first posture information, and the second posture information comprises:
    determining, according to the first posture information, a first predicted position of the photosphere when it is located at the previous location point, wherein the first predicted position represents the position of the photosphere relative to an initial location point when the photosphere is located at the previous location point;
    determining, according to the second posture information, a second predicted position of the photosphere when it is located at the next location point, wherein the second predicted position represents the position of the photosphere relative to the initial location point when the photosphere is located at the next location point;
    determining a movement displacement of the photosphere according to the second predicted position and the first predicted position, wherein the movement displacement represents the displacement of the photosphere moving from the previous location point to the next location point; and
    determining the second location information according to the movement displacement and the first location information.
  3. The method according to claim 2, wherein determining, according to the first posture information, the first predicted position of the photosphere when it is located at the previous location point comprises:
    determining the first predicted position according to the first posture information and a preset bone joint model, wherein the bone joint model is used to indicate the movement relationship of human joints;
    and wherein determining, according to the second posture information, the second predicted position of the photosphere when it is located at the next location point comprises:
    determining the second predicted position according to the second posture information and the bone joint model.
  4. The method according to claim 3, wherein the bone joint model comprises a preset movement radius, and determining the first predicted position according to the first posture information and the preset bone joint model comprises:
    determining the first predicted position according to the first posture information, the movement radius, and a preset first movement time, wherein the first movement time is the time required for the photosphere to move from the initial location point to the previous location point;
    and wherein determining the second predicted position according to the second posture information and the bone joint model comprises:
    determining the second predicted position according to the second posture information, the movement radius, and a preset second movement time, wherein the second movement time is the time required for the photosphere to move from the initial location point to the next location point.
  5. The method according to claim 1, wherein determining, according to the first posture information of the previous location point of the photosphere, the second posture information of the next location point adjacent to the previous location point comprises:
    acquiring posture data detected by an inertial measurement unit; and
    determining the second posture information according to the first posture information, the posture data, and a preset movement time, wherein the movement time is the time required for the photosphere to move from the previous location point to the next location point.
  6. The method according to claim 5, wherein determining the second posture information according to the first posture information, the posture data, and the preset movement time comprises:
    determining a movement angle according to the posture data and the movement time; and
    determining the second posture information according to the movement angle and the first posture information.
  7. The method according to claim 5, wherein the posture data is any one of the following: rotational angular velocity, gravitational acceleration, yaw angle, or pitch angle.
  8. The method according to any one of claims 1 to 7, wherein before determining, according to the first posture information of the previous location point of the photosphere, the second posture information of the next location point adjacent to the previous location point, the method further comprises:
    acquiring the first location information and the first posture information of the previous location point of the photosphere;
    wherein acquiring the first location information of the previous location point of the photosphere comprises:
    acquiring an image, wherein the image is an image collected by a collecting unit when the photosphere is located at the previous location point; and
    determining, according to the image, the position of the photosphere in the image to obtain the first location information.
  9. A virtual reality device, comprising:
    a display screen configured to display images; and
    a processor configured to:
    determine, according to first posture information of a previous location point of a photosphere on a controller, second posture information of a next location point adjacent to the previous location point;
    determine second location information of the next location point according to first location information of the previous location point, the first posture information, and the second posture information; and
    determine the position of the controller according to the second location information, so as to display the picture.
  10. The virtual reality device according to claim 9, wherein
    the processor is configured to:
    receive posture data sent by the controller, and determine the second posture information according to the first posture information and the posture data.
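Read together, claims 1 to 6 describe one pose-propagation step per frame: the second posture is the first posture advanced by the measured angular rate over the movement time, each posture is turned into a predicted photosphere position on a sphere of preset movement radius around the joint, and the displacement between the two predictions is added to the first location. The Python sketch below is one possible numeric reading of that recipe, using a planar (yaw-only) posture and assumed values for the radius, rate, and frame time; it is an illustration, not the patented implementation.

```python
import math

MOVEMENT_RADIUS_M = 0.55   # assumed arm radius taken from the bone joint model

def next_posture(first_yaw_rad, angular_rate_rad_s, move_time_s):
    """Claims 5-6: movement angle = rate * time; second posture = first posture + angle."""
    return first_yaw_rad + angular_rate_rad_s * move_time_s

def predicted_position(yaw_rad, radius_m=MOVEMENT_RADIUS_M):
    """Claims 3-4: place the photosphere on a circle of preset radius around the joint."""
    return (radius_m * math.cos(yaw_rad), radius_m * math.sin(yaw_rad))

def next_position(first_position, first_yaw_rad, second_yaw_rad):
    """Claim 2: displacement between the two predictions, added to the first location."""
    p1 = predicted_position(first_yaw_rad)
    p2 = predicted_position(second_yaw_rad)
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return first_position[0] + dx, first_position[1] + dy


if __name__ == "__main__":
    first_yaw = 0.0                                     # posture at the previous location point
    second_yaw = next_posture(first_yaw, 1.0, 0.02)     # 1 rad/s over a 20 ms frame
    print(next_position((0.55, 0.0), first_yaw, second_yaw))
```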
PCT/CN2021/081910 2020-03-27 2021-03-19 Virtual reality-based controller light ball tracking method on and virtual reality device WO2021190421A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202010230449.2 2020-03-27
CN202010230449.2A CN113516681A (en) 2020-03-27 2020-03-27 Controller light ball tracking method based on virtual reality and virtual reality equipment
CN202010226710.1 2020-03-27
CN202010226710.1A CN111427452B (en) 2020-03-27 2020-03-27 Tracking method of controller and VR system
CN202010246509.XA CN113467625A (en) 2020-03-31 2020-03-31 Virtual reality control device, helmet and interaction method
CN202010246509.X 2020-03-31

Publications (1)

Publication Number Publication Date
WO2021190421A1 true WO2021190421A1 (en) 2021-09-30

Family

ID=77891783

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/081910 WO2021190421A1 (en) 2020-03-27 2021-03-19 Virtual reality-based controller light ball tracking method on and virtual reality device

Country Status (1)

Country Link
WO (1) WO2021190421A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114167979A (en) * 2021-11-18 2022-03-11 上海鱼微阿科技有限公司 Handle tracking algorithm of augmented reality all-in-one machine
CN114167979B (en) * 2021-11-18 2024-04-26 玩出梦想(上海)科技有限公司 Handle tracking algorithm of augmented reality all-in-one machine

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017107116A1 (en) * 2015-12-24 2017-06-29 中国科学院深圳先进技术研究院 Navigation system for minimally invasive operation
CN107820593A (en) * 2017-07-28 2018-03-20 深圳市瑞立视多媒体科技有限公司 A kind of virtual reality exchange method, apparatus and system
CN108236782A (en) * 2017-12-26 2018-07-03 青岛小鸟看看科技有限公司 Localization method and device, the virtual reality device and system of external equipment
CN108267715A (en) * 2017-12-26 2018-07-10 青岛小鸟看看科技有限公司 Localization method and device, the virtual reality device and system of external equipment
CN108664122A (en) * 2018-04-04 2018-10-16 歌尔股份有限公司 A kind of attitude prediction method and apparatus

Similar Documents

Publication Publication Date Title
US11625103B2 (en) Integration of artificial reality interaction modes
CN110308789B (en) Method and system for mixed reality interaction with peripheral devices
EP3250983B1 (en) Method and system for receiving gesture input via virtual control objects
US10249090B2 (en) Robust optical disambiguation and tracking of two or more hand-held controllers with passive optical and inertial tracking
US9972136B2 (en) Method, system and device for navigating in a virtual reality environment
US9600078B2 (en) Method and system enabling natural user interface gestures with an electronic system
US8593402B2 (en) Spatial-input-based cursor projection systems and methods
US9207773B1 (en) Two-dimensional method and system enabling three-dimensional user interaction with a device
US9256986B2 (en) Automated guidance when taking a photograph, using virtual objects overlaid on an image
US20150220158A1 (en) Methods and Apparatus for Mapping of Arbitrary Human Motion Within an Arbitrary Space Bounded by a User's Range of Motion
KR102147430B1 (en) virtual multi-touch interaction apparatus and method
US20140009384A1 (en) Methods and systems for determining location of handheld device within 3d environment
WO2017057106A1 (en) Input device, input method, and program
US9310851B2 (en) Three-dimensional (3D) human-computer interaction system using computer mouse as a 3D pointing device and an operation method thereof
US9201519B2 (en) Three-dimensional pointing using one camera and three aligned lights
KR100532525B1 (en) 3 dimensional pointing apparatus using camera
US9678583B2 (en) 2D and 3D pointing device based on a passive lights detection operation method using one camera
CN113467625A (en) Virtual reality control device, helmet and interaction method
WO2021190421A1 (en) Virtual reality-based controller light ball tracking method on and virtual reality device
TW201913298A (en) Virtual reality system capable of showing real-time image of physical input device and controlling method thereof
JP2018045338A (en) Information processing method and program for causing computer to execute the information processing method
CN108268126B (en) Interaction method and device based on head-mounted display equipment
Cheng et al. Interaction Paradigms for Bare-Hand Interaction with Large Displays at a Distance
CN113516681A (en) Controller light ball tracking method based on virtual reality and virtual reality equipment
Singletary et al. Toward Spontaneous Interaction with the Perceptive Workbench

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21775813

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21775813

Country of ref document: EP

Kind code of ref document: A1